The goal of each integration’s documentation is to:

* Describe the benefits the integration offers and how Elastic can help with different use cases.
* Specify requirements, including system compatibility, supported versions of third-party products, permissions needed, and more.
* Provide a list of collected fields, including data and metric types for each field. This information is useful while evaluating the integration, interpreting collected data, or troubleshooting issues.

Each integration document should contain the following sections:

* [Overview](#idg-docs-overview)
* [What data does this integration collect?](#idg-data-collected)
* [What do I need to use this integration?](#idg-requirements)
* [How do I deploy this integration?](#idg-docs-setup)
* [Troubleshooting](#idg-docs-troubleshooting)
* [Performance and scaling](#idg-docs-performance-scaling)
* [Reference](#idg-docs-reference)

Some considerations when writing these documentation files at `_dev/build/docs/*.md`:
* In the documentation files (`_dev/build/docs/*.md`), `{{ url "getting-started-observability" "Elastic guide" }}` generates a link to the Observability Getting Started guide.
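
For example, a documentation file might use that shortcode inline like this (the surrounding sentence is illustrative):

```text
For step-by-step instructions on how to set up an integration, refer to the {{ url "getting-started-observability" "Elastic guide" }}.
```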
### Overview [idg-docs-overview]

The **Overview** section explains what the integration does, what the main use cases are, and contains the following subsections:

* **Compatibility**

  Indicates which versions, deployment methods, or architectures of the third-party software this integration is compatible with.

* **How it works**

  Provides a high-level overview of how the integration collects data.
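
For example, an overview for the AWS CloudFront integration might open like this:

```text
The AWS CloudFront integration allows you to monitor your AWS CloudFront usage. AWS CloudFront is a content delivery network (CDN) service.

Use the AWS CloudFront integration to collect and parse logs related to content delivery. Then visualize that data in Kibana, create alerts to notify you if something goes wrong, and reference logs when troubleshooting an issue.
```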
### What data does this integration collect? [idg-data-collected]

This section should include:

* The types of data collected by the integration
* Supported use cases
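
For example, the System integration might summarize its collected data like this:

```text
The System integration collects two types of data: logs and metrics.

Logs help you keep a record of events that happen on your machine. Log data streams collected by the System integration include application, system, and security events on machines running Windows, or auth and syslog events on machines running macOS or Linux.

Metrics give you insight into the state of the machine. Metric data streams collected by the System integration include CPU usage, load statistics, memory usage, information on network behavior, and more.
```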
### What do I need to use this integration? [idg-requirements]

The requirements section helps readers confirm that the integration will work with their systems. This section indicates what is required to use this integration:
* Elastic prerequisites (for example, a self-managed or Cloud deployment)
* Credentials or an admin account for the third-party software
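
For example, a requirements section might read like this:

```text
You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.

Each data stream collects different kinds of metric data, which may require dedicated permissions to be fetched and may vary across operating systems. Details on the permissions needed for each data stream are available in the Metrics reference.
```

For a much more detailed example, refer to the [AWS integration requirements](https://github.com/elastic/integrations/blob/main/packages/aws/_dev/build/docs/README.md#requirements).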
### How do I deploy this integration? [idg-docs-setup]
This section refers to the Observability [Getting started guide](docs-content://solutions/observability/get-started.md) for generic, step-by-step instructions, and should also include the following additional setup instructions:

**Onboard and configure**

* How do I install the Agent and deploy this integration?
* Which agent deployment methods are acceptable? Fleet? Standalone?
* Is agentless deployment supported for this integration?
* What data, input, fields, or authentication tokens must be configured during integration deployment? What values should they have?

**Validation**
* How can I test whether the integration is working? Include example commands or test files if applicable.
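
For example, a validation subsection might walk the reader through checks like these (exact steps vary by integration; the wording below is illustrative):

```text
1. In Kibana, confirm that the integration's assets (such as dashboards and index templates) were installed.
2. Open Discover and filter on `data_stream.dataset` for this integration to confirm that events are arriving.
3. Open one of the integration's dashboards and verify that the visualizations are populated.
```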

::::{note}
When possible, use links to point to third-party documentation for configuring non-Elastic products since workflows may change without notice.
::::
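
For example, setup instructions for an integration that receives syslog data from Cisco devices might direct readers to the vendor's documentation first (the wording is illustrative):

```text
Before sending logs to Elastic from your Cisco device, you must configure your device according to Cisco's documentation on configuring a syslog server.

After you've configured your device, you can set up the Elastic integration. For step-by-step instructions, see the Getting started guide.
```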
### Troubleshooting [idg-docs-troubleshooting]
The troubleshooting section should include details specific to each input type, along with general guidance for resolving common issues encountered when deploying this integration. Whenever possible, link to the troubleshooting documentation provided by the third-party software.
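
For example, troubleshooting notes for the System integration include caveats like the following:

```text
Note that certain data streams may access `/proc` to gather process information, and the resulting `ptrace_may_access()` call by the kernel to check for permissions can be blocked by [AppArmor and other LSM software](https://gitlab.com/apparmor/apparmor/wikis/TechnicalDoc_Proc_and_ptrace), even though the System module doesn't use `ptrace` directly.

In addition, when running inside a container, the proc filesystem directory of the host should be set using the `system.hostfs` setting to `/hostfs`.
```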
### Performance and scaling [idg-docs-performance-scaling]
Based on the input, this section should explain how to scale the integration and which types of scaling architecture work best, including benchmarking recommendations.
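
For example, for an integration that receives data over syslog, this section might offer guidance along these lines (illustrative wording, not a prescription):

```text
For high event volumes, run several Elastic Agents behind a load balancer and point your devices at the load balancer's address. Benchmark a single Agent to find its sustained events-per-second rate, then add Agents until ingest keeps up with your peak load.
```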

### Reference [idg-docs-reference]
There can be any number of reference sections, for example:
* ECS field reference
* Metrics reference
* Logs reference
* Inputs used in this integration
* APIs used to collect data
* Changelog

Each reference section should contain detailed information about:
* A list of the log or metric types supported within the integration and a link to the relevant third-party documentation.
* (Optional) An example event in JSON format.
* Exported fields for logs, metrics, and events with actual types (for example, `counters`, `gauges`, `histograms` vs. `longs` and `doubles`). Fields should be generated using the instructions in [Fine-tune the integration](https://github.com/elastic/integrations/blob/main/docs/fine_tune_integration.md).
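
For example, a reference section might follow a structure like this, replacing `<placeholder text>` with details about the integration:

```text
<!-- Repeat for both Logs and Metrics if applicable -->
## <Logs|Metrics> reference

<!-- Repeat for each data stream of the current type -->
### <Data stream name>

The `<data stream name>` data stream provides events from <source> of the following types: <list types>.

<!-- Optional: an example event for `<data stream name>` -->

#### Exported fields

<insert table>
```

For instance, the PAN-OS logs reference describes the `panos` data stream, which provides GlobalProtect, HIP Match, Threat, Traffic, and User-ID events, and links to Palo Alto Networks' own log field documentation for each type.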