F5 SP AVR + Big Data Training - Index¶
Welcome¶
Welcome to F5’s Service Provider AVR and Big Data hands-on training series. The intended audience for these labs is Service Provider engineers who would like to leverage the power of F5 data visibility and integrate this immense data capability into open source tools such as Elasticsearch, Hadoop, and others.
Getting Started¶
Please follow the instructions provided by this documentation to start and access your lab.
Note
All work for this lab will be performed exclusively from the Linux Jumphost and Linux Client machines. All required access and services needed to perform the classes and labs are provided by the UDF. No installation or interaction with your local system is required.
Prerequisites¶
To complete this series of training classes you will need to utilize the provided blueprint for the course session. To access the UDF sessions you will need to meet the following prerequisites:
- Current access to UDF
- Your access machine’s SSH key loaded in UDF
- A Windows or macOS SSH client working with UDF
All pre-built environments implement the lab topology shown below.
UDF Blueprint¶
Please follow the instructions provided by your lab instructor to access your lab environment. The lab environment will be delivered via UDF blueprints to each student.
Note
Please deploy and start your lab as soon as you have access to the class as the lab takes some time to boot all the components.
Lab Topology¶
The network topology implemented for this lab is based on the Service Provider Gi LAN path. The focus of the lab is Control Plane programmability and Data Plane elements, so this lab will focus on both parts at different times. The following components have been included in your lab environment:
- 1 x F5 BIG-IP VE (v13.0 HF2)
- 1 x Linux Jumphost (Ubuntu 16.04 - MATE)
- 2 x Linux Clients (Ubuntu 16.04 - MATE)
- 1 x Linux Server (Ubuntu 16.04)
The following table lists VLANS, IP Addresses and Credentials for all components:
| Component | VLAN | IP Address | Credentials |
|---|---|---|---|
| Linux Jumphost | Mgmt | 10.1.1.20 | |
| BIG-IP | Mgmt | 10.1.1.4 | admin/admin |
| | Internal | 10.1.10.5 | |
| | External | 10.1.20.5 | |
| | Control | 10.1.30.5 | |
| Client 00 | Mgmt | 10.1.1.9 | udfclient/S3rv1ceP0weR |
| | Internal | 10.1.10.25 | |
| Client 01 | Mgmt | 10.1.1.7 | udfclient/S3rv1ceP0weR |
| | Internal | 10.1.10.30 | |
| ELK Stack | Mgmt | 10.1.1.5 | ubuntu/default |
| | Control | 10.1.30.15 | |
Class 1: BIG-IP AVR (BIG-IP Goodness)¶
This class covers the following topics:
- Module 1
- REST API Basics
- Module 2
- F5 BIG-IP AVR
- Configuring AVR
- Navigating AVR
- Modify AVR Reports
Expected time to complete: 30 mins
Module 1: REST API Basics¶
In this module you will learn the basic concepts required to interact with the BIG-IP iControl REST API. Additionally, you will walk through a typical Device navigation.
This is a cut-down version of the F5 Programmability Super-NetOps training.
Note
The lab deployment for this lab includes a single BIG-IP device. For most of the labs we will be configuring this BIG-IP device (management IP and licensing have already been completed).
Note
It’s beneficial to have GUI/SSH sessions open to BIG-IP devices while going through this lab. Feel free to verify the actions taken in the lab against the GUI or SSH. You can also watch the following logs:
- BIG-IP:
- /var/log/ltm
- /var/log/restjavad.0.log
Lab 1.1: Exploring the iControl REST API¶
Task 1 – Explore the API using the TMOS Web Interface¶
In this lab we will explore the API using an interface that is built in to TMOS. This utility is useful for understanding how TMOS objects map to the REST API. The interface implements full Create, Read, Update and Delete (CRUD) functionality; however, in most practical use cases it’s far easier to use it as a ‘Read’ tool rather than trying to create objects directly from it. It’s usually simpler to create the object as needed with TMUI or TMSH and then use this tool to view the created object with all the correct attributes already populated.
Open Google Chrome and navigate to the URL https://10.1.1.4/mgmt/toc (or click the BIG-IP REST TOC bookmark). The ‘/mgmt/toc’ path in the URL is available on all TMOS versions 11.6 or newer.
Authenticate to the interface using the default admin/admin credentials.
You will now be presented with a top-level list of various REST resources. At the top of the page there is a search box
that can be used to find items on the page. Type ‘net’ in the search box and then click on the ‘net’ link under iControl REST Resources:
Find the /mgmt/tm/net/route-domain Collection and click it. You will now see a listing of the Resources that are part of the route-domain(s) collection. As you can see, the default route domain of 0 is listed. You can also create new objects by clicking the create button; additionally, resources can be deleted using the delete button or edited using the edit button.
Click the 0 resource to view the attributes of route-domain 0 on the device. Take note of the full path to the resource. Here is how the path is broken down:
| / mgmt | / tm | / net | / route-domain | / ~Common~0 |
|---|---|---|---|---|
| Root | OC | OC | Collection | Resource |

*OC = Organizing Collection
Lab 1.2: REST API Authentication & ‘example’ Templates¶
One of the basic concepts of interacting with REST APIs is how a particular consumer is authenticated to the system. BIG-IP supports two types of authentication: HTTP BASIC and token-based. It’s important to understand both of these authentication mechanisms, as consumers of the API will often make use of both types depending on the use case. This lab will demonstrate how to interact with both types of authentication.
Task 1 - Import the Postman Collection & Environment¶
In this task you will Import a Postman Collection & Environment for this lab. Perform the following steps to complete this task:
Open the Postman tool by clicking the Postman icon on the desktop of your Linux Jumphost
Click the ‘Import’ button in the top left of the Postman window
Click the ‘Import from Link’ tab. Paste the following URL into the text box and click ‘Import’
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/F5_Automation_Orchestration_Intro.postman_collection.json
You should now see a collection named ‘F5 Automation & Orchestration Intro’ in your Postman Collections sidebar:
Import the Environment file by clicking ‘Import’ -> ‘Import from Link’ and pasting the following URL and clicking ‘Import’:
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/INTRO_Automation_Orchestration_Lab.postman_environment.json
To assist in multi-step procedures we make heavy use of the ‘Environments’ capability in Postman. This capability allows us to set various global variables that are then substituted into a request before it’s sent. Set your environment to ‘INTRO - Automation & Orchestration Lab’ by using the menu at the top right of your Postman window:
Task 2 – HTTP BASIC Authentication¶
In this task we will use the Postman tool to send API requests using
HTTP BASIC authentication. As its name implies this method of
authentication encodes the user credentials via the existing BASIC
authentication method provided by the HTTP protocol. The mechanism this
method uses is to insert an HTTP header named ‘Authorization’ with a
value that is built by Base 64 encoding the string
<username>:<password>
. The resulting header takes this form:
Authorization: Basic YWRtaW46YWRtaW4=
It should be noted that cracking this method of authentication is TRIVIAL; as a result, API calls should always be performed using HTTPS (the F5 default) rather than HTTP.
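As a sketch of what Postman does under the hood, the same header and request can be produced with cURL from the Jumphost (assuming the lab’s admin/admin credentials; -k skips validation of the BIG-IP’s self-signed certificate):

# Base64-encode the credentials by hand:
echo -n 'admin:admin' | base64
# -> YWRtaW46YWRtaW4=

# Or let cURL build the Authorization header for you with -u:
curl -sk -u admin:admin https://10.1.1.4/mgmt/tm/ltm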
Perform the following steps to complete this task:
Click the ‘Collections’ tab on the left side of the screen, expand the ‘F5 Automation & Orchestration Intro’ collection on the left side of the screen, expand the ‘Lab 1.2 – API Authentication’ folder:
Click the ‘Step 1: HTTP BASIC Authentication’ item. Click the ‘Authorization’ tab and select ‘Basic Auth’ as the Type. Fill in the username and password (admin/admin) and click the ‘Update Request’ button. Notice that the number of Headers in the Headers tab changed from 1 to 2. This is because Postman automatically created the HTTP header and updated your request to include it. Click the ‘Headers’ tab and examine the HTTP header:
Click the ‘Send’ button to send the request. If the request succeeds you should be presented with a listing of the /mgmt/tm/ltm Organizing Collection.
Update the credentials and specify an INCORRECT password. Send the request again and examine the response:
Task 3 – Token Based Authentication¶
One of the disadvantages of BASIC Authentication is that credentials are sent with each and every request. This can result in a much greater attack surface being exposed unnecessarily. As a result Token Based Authentication (TBA) is preferred in many cases. This method only sends the credentials once, on the first request. The system then responds with a unique token for that session and the consumer then uses that token for all subsequent requests. Both BIG-IP and iWorkflow support token-based authentication that drops down to the underlying authentication subsystems available in TMOS. As a result the system can be configured to support external authentication providers (RADIUS, TACACS, AD, etc) and those authentication methods can flow through to the REST API. In this task we will demonstrate TBA using the local authentication database, however, authentication to external providers is fully supported.
For more information about external authentication providers see the section titled “About external authentication providers with iControl REST” in the iControl REST API User Guide available at https://devcentral.f5.com
Perform the following steps to complete this task:
Click the ‘Step 2: Get Authentication Token’ item in the Lab 1.2 Postman Collection
Notice that we send a POST request to the /mgmt/shared/authn/login endpoint.
Click the ‘Body’ tab and examine the JSON that we will send to BIG-IP to provide credentials and the authentication provider:
Modify the JSON body and add the required credentials (admin/admin). Then click the ‘Send’ button.
Examine the response status code. If authentication succeeded and a token was generated the response will have a 200 OK status code. If the status code is 401 then check your credentials:
Successful:
Unsuccessful:
Once you receive a 200 OK status code examine the response body. The various attributes show the parameters assigned to the particular token. Find the ‘token’ attribute and copy it into your clipboard (Ctrl+c) for use in the next step:
Click the ‘Step 3: Verify Authentication Works’ item in the Lab 1.2 Postman collection. Click the ‘Headers’ tab and paste the token value copied above as the VALUE for the X-F5-Auth-Token header. This header is required on all requests when using token-based authentication.
Click the ‘Send’ button. If your request is successful you should see a ‘200 OK’ status and a listing of the ltm Organizing Collection.
We will now update your Postman environment to use this auth token for the remainder of the lab. Click the Environment menu in the top right of the Postman window and click ‘Manage Environments’:
Click the ‘INTRO – Automation & Orchestration Lab’ item:
Update the value for bigip_a_auth_token by pasting (Ctrl-v) in your auth token.
Click the ‘Update’ button and then close the ‘Manage Environments’ window. Your subsequent requests will now automatically include the token.
Click the ‘Step 4: Set Authentication Token Timeout’ item in the Lab 1.2 Postman collection. This request will PATCH your token Resource (check the URI) and update the timeout attribute so we can complete the lab easily. Examine the request type and JSON Body and then click the ‘Send’ button. Verify that the timeout has been changed to ‘36000’ in the response:
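For reference, the whole token flow from this task can also be driven with cURL (a minimal sketch; the python one-liner used to extract the token is an assumption, any JSON parser will do):

# Request a token from the local (tmos) authentication provider:
TOKEN=$(curl -sk https://10.1.1.4/mgmt/shared/authn/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"admin","loginProviderName":"tmos"}' \
  | python -c 'import sys, json; print(json.load(sys.stdin)["token"]["token"])')

# Use the token on subsequent requests:
curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/ltm

# PATCH the token resource to extend its timeout to 36000 seconds:
curl -sk -X PATCH "https://10.1.1.4/mgmt/shared/authz/tokens/$TOKEN" \
  -H "X-F5-Auth-Token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"timeout":"36000"}'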
Task 4 – Get a pool ‘example’ Template¶
In order to assist with REST API interactions you can request a template of the various attributes of a Resource type in a Collection. This template can then be used as the body of a POST, PUT or PATCH request as needed.
Perform the following steps:
Click the ‘Step 5: Get ‘example’ of a Pool Resource’ item in the Lab 1.2 Postman collection
Examine the URI. Notice the addition of example at the end of the collection name:
Click ‘Send’ and examine the FULL response. You will see descriptions and then all the attributes for the Pool resource type. The response also shows the default values for the attributes if applicable:
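The equivalent cURL request, as a sketch (reusing the auth token from Lab 1.2):

curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/ltm/pool/example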
Module 2: F5 Application Visibility and Reporting¶
In this module we will explore how to configure and use F5’s Application Visibility and Reporting to provide application reporting. Analytics (also called Application Visibility and Reporting) is a module on the BIG-IP® system that you can use to analyze the performance of services and applications. It provides detailed metrics such as transactions per second, server and client latency, request and response throughput, and sessions. You can view metrics for applications, virtual servers, pool members, URLs, specific countries, and additional detailed statistics about application traffic running through the BIG-IP system.
The labs in this module will focus on the high-level features of AVR, including Analytics profiles and the configuration and navigation of the AVR reports and the information generated.
The BIG-IP in the lab is preconfigured with DNS, PEM, and AFM provisioned and configured. Please explore the current F5 config to familiarise yourself with this lab.
Confirm the following main config items to verify your BIG-IP lab is in working order:
- Check that the provisioned modules reflect the image below (DNS / PEM / AFM and AVR).
- Check the VLAN setup. Make sure the Internal VLAN is set to source (SP DAG).
- Verify that self IPs and routes are present.
- Check that the PEM data plane is set up; you should see four PEM data plane virtual servers as below.
- Check that the DNS listener is configured.
Note
Explore the rest of the configuration. Please look at the DNS setup (cache / monitor) and AFM CGNAT (NAPT) configurations.
Lab 2.1: Configure AVR Profiles¶
An Analytics profile is a set of definitions that determines the circumstances under which the system gathers, logs, notifies, and graphically displays information regarding traffic to an application or service. The Analytics module requires that you select an Analytics profile for each application you want to monitor. You associate the Analytics profile with one or more virtual servers used by the application / service.
- In the Analytics profile, you customize:
- What statistics to collect
- Where to collect data (locally, remotely, or both)
- Whether to capture the traffic itself
- Whether to send notifications.
Task 1 - Import the Postman Collection & Environment¶
In this task you will Import a Postman Collection & Environment for this lab. Perform the following steps to complete this task:
- Open the Postman tool by clicking the Postman icon on the desktop of your Linux Jumphost (Postman should be open from the previous lab)
- Click the ‘Import’ button in the top left of the Postman window
Click the ‘Import from Link’ tab. Paste the following URL into the text box and click ‘Import’
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/SP Modules.postman_collection.json
You should now see a collection named ‘SP Modules’ in your Postman Collections sidebar:
Import the Environment file by clicking ‘Import’ -> ‘Import from Link’ and pasting the following URL and clicking ‘Import’:
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/F5 SPDevOps.postman_environment.json
Task 2 – Configure TCP Analytics¶
In this task we will query and configure the TCP AVR profile. This will be done using the REST API (explored in the previous lab).
Perform the following steps to complete this task:
Click the ‘TCP Analytics’ item in the SP Modules Postman Collection.
Notice that we are sending a GET request to the /mgmt/tm/ltm/profile/tcp-analytics endpoint. Check the body returned and observe the default values.
Click on the ‘Create TCP Analytics Profile’ item and check the body message for ELK_PEM_Publisher (we will use the PEM index in ELK for logging TCP optimisation).
Verify in the BIG-IP TMUI that the new profile was created.
Add the profile to the virtual server manually (this is not currently available in the REST API).
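As a cURL sketch of this task: the GET mirrors the Postman request exactly, while the POST body below is a hypothetical minimal example; the authoritative body (including the ELK_PEM_Publisher settings) lives in the Postman collection.

# Read the default tcp-analytics profile settings:
curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/ltm/profile/tcp-analytics

# Hypothetical create; attribute names follow tmsh 'ltm profile tcp-analytics':
curl -sk -X POST https://10.1.1.4/mgmt/tm/ltm/profile/tcp-analytics \
  -H "X-F5-Auth-Token: $TOKEN" -H 'Content-Type: application/json' \
  -d '{"name":"elk_tcp_analytics","collectedStatsExternalLogging":"enabled","externalLoggingPublisher":"/Common/ELK_PEM_Publisher"}'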
Task 3 – Configure PEM Analytics¶
In this task we will query and configure the PEM AVR profile. This will be done using the REST API (explored in the previous lab).
Perform the following steps to complete this task:
Click the ‘PEM’ item in the SP Modules Postman Collection.
Notice there are two sections we must update: Global and Classification. We will do Global first. Click on ‘Request PEM Global Analytics Options’; we are sending a GET request to the /mgmt/tm/pem/global-settings/analytics endpoint. Check the body returned and observe the default values.
Click on the ‘Update PEM Global Analytics Options - External Logging’ item and check the body message for ELK_PEM_Publisher.
Verify in the BIG-IP TMUI that the updates were applied in the PEM global options.
Click on ‘Request PEM Classification Profile’; we are sending a GET request to the /mgmt/tm/ltm/profile/classification/classification_pem endpoint. Check the body returned and observe the default values.
Click on the ‘Update PEM Classification Profile’ item and check the body message for ELK_PEM_Publisher.
Verify in the BIG-IP TMUI that the updates were applied in PEM Classification.
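The two GET requests from this task, as cURL sketches (read-only, so safe to run alongside Postman):

curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/pem/global-settings/analytics
curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/ltm/profile/classification/classification_pem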
Task 4 – Configure AFM Analytics¶
In this task we will query and configure the AFM AVR profile and logging. This will be done using the REST API (explored in the previous lab).
Perform the following steps to complete this task:
Click the ‘AFM’ item in the SP Modules Postman Collection.
Notice there are two sections we must update: Security Reporting and Event Logging. We will do Security Reporting first. Click on ‘Request AFM Security Reporting Settings’; we are sending a GET request to the /mgmt/tm/security/analytics/settings endpoint. Check the body returned and observe the default values.
Click on the ‘Update AFM Security Reporting Settings’ item and check the body message for ELK_AFM_Publisher.
Verify in the BIG-IP TMUI that the updates were applied in the AFM Report Settings.
Note
‘Request AFM Device DoS Settings’ can be used to report on the settings currently set; however, the REST API cannot be used to update these settings at this time.
Click on ‘Request AFM Event Logger’; we are sending a GET request to the /mgmt/tm/security/log/profile/ endpoint. Check the body returned and observe the default values.
Click on the ‘Create AFM Event Log Profile’ item and check the body message for ELK_AFM_Publisher.
Additional steps are required for AFM, as not all REST commands can configure all sections at this time. Go to the TMUI on the BIG-IP and navigate to Security / Event Logs / Logging Profiles. Change the publishers and tick the events to log.
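The two AFM GET requests from this task as cURL sketches (read-only):

curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/security/analytics/settings
curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/security/log/profile/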
Task 5 – Configure DNS Analytics¶
In this task we will query and configure the DNS AVR profile. This will be done using the REST API (explored in the previous lab).
Perform the following steps to complete this task:
Click the ‘DNS’ item in the SP Modules Postman Collection.
Notice that we are sending a GET request to the /mgmt/tm/ltm/profile/dns-logging endpoint. Check the body returned and observe the default values.
Click on the ‘Create DNS Log Profile’ item and check the body message for ELK_DNS_Publisher.
Verify in the BIG-IP TMUI that the new profile was created.
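And the DNS logging profile GET as a cURL sketch:

curl -sk -H "X-F5-Auth-Token: $TOKEN" https://10.1.1.4/mgmt/tm/ltm/profile/dns-logging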
Lab 2.2: Access Clients and Generate Traffic¶
In this lab you will walk through reconfiguring the clients to use the F5 for traffic. This will generate traffic through PEM, DNS, and AFM for AVR and for logging to the ELK stack.
Task 1 - Configure Client Networking & Traffic Generation¶
In this task we will configure and use the client UDF machines. These clients need to be reconfigured to utilise the network and DNS from the F5.
Perform the following steps to complete this task:
Click on the RDP access in UDF for each client.
Click on the networking script (this will prompt for the sudo password).
Once the script has completed, run netstat -nr and nslookup to verify you have traffic passing through the F5 (see the example commands after this list).
Verify in the BIG-IP TMUI that you see traffic on the F5 virtual servers.
Apply the same fix to the other client.
Once both clients are fixed, generate traffic by opening applications and webpages (leave the applications open so traffic generation continues).
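By way of example, the verification in step 3 might look like this on a client (assumptions: the default route should point at the BIG-IP internal self IP 10.1.10.5 from the topology table, and the nslookup target is arbitrary):

netstat -nr
# the default gateway should be the BIG-IP internal self IP (10.1.10.5)

nslookup www.f5.com
# resolution should be answered via the DNS listener on the BIG-IP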
Class 2: Introduction to ELK Stack (ELK Coolness)¶
This class covers the following topics:
- ELK Stack Overview
- ELK Stack build on Ubuntu
- F5 logging to ELK Stack
- ELK Stack:
- Indexes
- Navigation
- Searches
- Visualisations
- Dashboards
Expected time to complete: 1.5 hours
Module 1: ELK Stack Build Ubuntu Server¶
The ELK stack from the previous module is made up of three key components:
- Logstash
- Elasticsearch
- Kibana
There are many ready-made ELK stack services that can be used; however, it is important to understand how the ELK stack is built, and the configuration files and their purposes.
This module will guide you through the installation of the ELK stack onto an Ubuntu server.
External Reference Documentation:
https://www.elastic.co/guide/index.html
Lab 1.1: Install the Ubuntu Base¶
In this lab you will walk through installing the Ubuntu base, ready for the ELK stack.
Task 1 - GIT Clone Repo onto the Server¶
git clone https://github.com/jarrodlucia/bigip_elk_server <directory of choice>
Task 2 - Install additional software required for ELK Stack¶
sudo apt-get install software-properties-common
sudo apt install curl
Task 3 - Add and install Java¶
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer
Accept the Oracle License Agreement
Fix below for the Java 8 installer error (if required):
sudo apt-get -y install oracle-java8-installer
sudo sed -i 's|JAVA_VERSION=8u144|JAVA_VERSION=8u152|' oracle-java8-installer.*
sudo sed -i 's|PARTNER_URL=http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/|PARTNER_URL=http://download.oracle.com/otn-pub/java/jdk/8u152-b16/aa0333dd3019491ca4f6ddbe78cdb6d0/|' oracle-java8-installer.*
sudo sed -i 's|SHA256SUM_TGZ="e8a341ce566f32c3d06f6d0f0eeea9a0f434f538d22af949ae58bc86f2eeaae4"|SHA256SUM_TGZ="218b3b340c3f6d05d940b817d0270dfe0cfd657a636bad074dcabe0c111961bf"|' oracle-java8-installer.*
sudo sed -i 's|J_DIR=jdk1.8.0_144|J_DIR=jdk1.8.0_152|' oracle-java8-installer.*
sudo apt-get -y install oracle-java8-installer
Lab 1.2: Install Elasticsearch¶
This lab will install the Elasticsearch component. It is recommended to install Elasticsearch as the first module.
Task 1 Install Repo and Keys¶
- Download and install the public signing key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
- Save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update
Task 2 Install Elasticsearch and set up the system¶
- Install Elasticsearch
sudo apt-get install elasticsearch
- Edit the config file to change the bind address to the host address 10.1.1.5 (see the snippet below)
sudo vi /etc/elasticsearch/elasticsearch.yml
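A minimal sketch of the relevant elasticsearch.yml change (the rest of the file can stay at its defaults):

# /etc/elasticsearch/elasticsearch.yml
network.host: 10.1.1.5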
- Install additional plugins
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
- Restart Elasticsearch
sudo systemctl restart elasticsearch
- Configure the system to start at boot
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
- Checking Start / Stop / Status
sudo systemctl start elasticsearch.service
sudo systemctl stop elasticsearch.service
sudo systemctl status elasticsearch.service
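To confirm Elasticsearch is up and bound to the host address, a quick check from the server (give the service a few seconds to start):

curl http://10.1.1.5:9200
# should return a JSON document with the cluster name and version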
Lab 1.3: Install Kibana¶
In this lab we will install Kibana
Task 1 Install Kibana¶
Install Kibana
sudo apt-get install kibana
Edit the config file to set the outside IP address
sudo vi /etc/kibana/kibana.yml
Note
Kibana is served by a back-end server. server.port specifies the port to use and is left at the default Kibana port, 5601. server.host should be set to the UDF management IP address 10.1.1.5, as we will be accessing Kibana via the Linux Jumphost. elasticsearch.url is the URL of the Elasticsearch instance to use for all queries.
- server.port: 5601
- server.host: "10.1.1.5"
- elasticsearch.url: "http://10.1.1.5:9200"
Kibana restart
sudo systemctl restart kibana.service
To configure Kibana to start automatically when the system boots up, run the following commands:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
Kibana Control
sudo systemctl start kibana.service
sudo systemctl stop kibana.service
Check Kibana is running via command-line:
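One way to do this (an assumption; any equivalent check works):

sudo systemctl status kibana.service
curl -I http://10.1.1.5:5601
# an HTTP response from port 5601 confirms Kibana is listening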
- Access Kibana via the Linux Jumphost to verify access
Lab 1.4: Install Logstash¶
Task 1 - Install Logstash¶
- Install Logstash
sudo apt-get install logstash
- Install Additional Plugins
sudo /usr/share/logstash/bin/logstash-plugin install logstash-filter-dns
sudo /usr/share/logstash/bin/logstash-plugin install logstash-filter-geoip
Note
Be patient with the plugin install; it can take a few moments.
- Copy or create the new file in the directory /etc/logstash/conf.d/
sudo cp <git clone directory>/config_files/logstash.conf /etc/logstash/conf.d/logstash.conf
sudo vi /etc/logstash/conf.d/logstash.conf
- Logstash restart
sudo systemctl restart logstash.service
- Check that Logstash started correctly with no errors from the logstash.conf file (see the config-test sketch after this list)
- To configure Logstash to start automatically when the system boots up, run the following commands:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable logstash.service
- Logstash Control
sudo systemctl start logstash.service
sudo systemctl stop logstash.service
sudo systemctl status logstash.service
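A sketch of the config check referred to above (the --config.test_and_exit flag parses logstash.conf without starting the pipeline):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf

# runtime errors, if any, land in the Logstash log:
sudo tail -f /var/log/logstash/logstash-plain.log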
logstash.conf
input {
  tcp {
    port => 5516
    type => afm
  }
  tcp {
    port => 5515
    type => dns
  }
  tcp {
    port => 5514
    type => pem
  }
}
filter {
  if [type] == 'pem' {
    kv {
      source => "message"
      field_split => ","
    }
  }
  if [type] == 'afm' {
    kv {
      source => "message"
      field_split => ","
    }
    geoip {
      source => "SourceIp"
      target => "SourceIp_geo"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    geoip {
      source => "DestinationIp"
      target => "DestinationIp_geo"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }
  }
  if [type] == 'dns' {
    kv {
      source => "message"
      field_split => ","
    }
  }
}
output {
  if [type] == 'pem' {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "pem-%{+YYYY.MM.dd}"
      template_name => "pem"
    }
  }
  if [type] == 'afm' {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "afm-%{+YYYY.MM.dd}"
      template_name => "afm"
    }
  }
  if [type] == 'dns' {
    elasticsearch {
      hosts => ["10.1.1.5:9200"]
      index => "dns-%{+YYYY.MM.dd}"
      template_name => "dns"
    }
  }
  stdout {}
}
Lab 1.5: Configure elasticsearch templates¶
Templates are used to create mappings between logstash and elasticsearch. Without the mappings elasticsearch will create automatic mappings however these will be elasticsearch’s best guess as to the field. In most cases this will default to text
. This means many of the fields such as IP address’s will be searchable but not able to be used in Visualisations.
Upload the Elasticsearch templates and mappings. There are multiple ways this can be achieved; the most common are cURL and a REST-based program such as Postman. Feel free to use whichever method you are most comfortable with.
Note
RECOMMENDATION: Use cURL for uploading the templates with the JSON files. Postman is useful for Elasticsearch management once the templates are in place.
Task 1 Option 1 - Install module templates in Elasticsearch via cURL¶
- Install the index templates into Elasticsearch for the required modules
cd <git clone directory>/json/
(the git clone directory from Lab 1.1)
curl -XPUT http://localhost:9200/_template/pem?pretty -d @pem_mapping.json
curl -XPUT http://localhost:9200/_template/afm?pretty -d @afm_mapping.json
curl -XPUT http://localhost:9200/_template/dns?pretty -d @dns_mapping.json
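To verify each upload registered, the templates can be read back (an empty {} means the template is not installed):

curl -s http://localhost:9200/_template/pem?pretty
curl -s http://localhost:9200/_template/afm?pretty
curl -s http://localhost:9200/_template/dns?pretty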
Task 1 Option 2 - Install module templates in Elasticsearch via Postman¶
Import ELK Postman Collection and Environment
Click the ‘Import from Link’ tab. Paste the following URL into the text box and click ‘Import’
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/ELK Stack.postman_collection.json
You should now see a collection named ‘F5 ELK’ in your Postman Collections sidebar:
Import the Environment file by clicking ‘Import’ -> ‘Import from Link’ and pasting the following URL and clicking ‘Import’:
https://raw.githubusercontent.com/jarrodlucia/bigip_elk_server/develop/postman_collections/F5 ELK Env.postman_environment.json
- Click on ‘GET Elasticsearch information’ and hit Send.
You should see cluster information regarding Elasticsearch.
- Click on ‘GET Elasticsearch indices’ and hit Send.
You should see the current indices and information regarding each index.
We will use this command to observe the creation of new indexes.
- Click on ‘GET Elasticsearch Template Searches’ and hit Send.
You should see any current templates listed.
Note
A new install will NOT contain any templates and will show {}.
- Click on the ‘Create Template’ items for AFM, PEM, and DNS to install all templates.
Note
Create all templates from the Postman collection.
- Verify the templates were created and exist. Click on ‘GET Elasticsearch Template Searches’.
Note
Look through the template JSON output by Postman. Verify that the three templates created are present.
Lab 1.6: Send Logs to ELK Stack¶
Configure the F5 for logging to the new ELK stack and check that data is arriving at the ELK stack.
Task 1 - Confirm BIG-IP is sending logs to ELK Stack¶
- Confirm via the TMUI that the setup from Class 1, Lab 2.1 is in place.
Update AFM Reporting to include anything that was not included in the previous lab.
Note
- Make sure the correct port is allocated as per previous Logstash configuration
- Pool = tcp server:5514 - PEM
- Pool = tcp server:5515 - DNS
- Pool = tcp server:5516 - AFM/CGNAT
- Confirm data is arriving on the server
sudo tcpdump -i eth1 port 5514
- Check that Data is arriving in the Index
curl 'localhost:9200/_cat/indices?v'
or via POSTMAN
Lab 1.7: Create Indexes and Import Pre-Configured Objects¶
Indexes are Elasticsearch’s way of storing documents in shards. When an index is created, the mapping templates we uploaded earlier are used to map each of the fields to a type. This is only done once, when the index is created.
Note
If mappings are changed or updates are required, the index will have to be deleted, the template deleted, the mapping changed, and the template re-added. At that point, re-creating the index will remap it to the new template (see the sketch below).
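A sketch of that remapping flow for the pem index (the index name pattern comes from logstash.conf; this is destructive and deletes the indexed data, so only do it on a lab system):

# delete the existing indexes that match the pattern:
curl -XDELETE 'http://localhost:9200/pem-*'

# delete and re-upload the template with the changed mapping:
curl -XDELETE http://localhost:9200/_template/pem
curl -XPUT http://localhost:9200/_template/pem?pretty -d @pem_mapping.json

# the next document Logstash ships will re-create the index with the new mapping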
This lab will focus on creating the indexes for each module, based on the Logstash configuration in Lab 1.4.
We will import the prepared F5 module JSON Kibana searches, visualisations, and dashboards.
Task 1 - Create Kibana Indexes¶
- Configure Indexes in Kibana
Configure the first and default index
- index pattern = pem-*
  - select @timestamp
- index pattern = afm-*
  - select @timestamp (follow the PEM example above)
- index pattern = dns-*
  - select @timestamp
Task 2 - Import preconfigured Kibana JSONs¶
Searches / Visualisations and Dashboards
- Import object data into Kibana
Import the JSON files in the following order:
- Searches
- Visualisations
- Dashboards
Searches
Visuals
Dashboards
Note
The JSON files have been placed in the IN_CASE_OF_EMERGENCY folder on the desktop
Module 2: Kibana and Visualisation¶
Coolness of Kibana interface
- Navigation
- Searching
- Creating Searches
- Creating Visualisations
- Creating Dashboards
Lab 2.1 – Kibana Interface & Search¶
Kibana is the interface to Elasticsearch and makes visualisations and dashboards available. It also allows REST API calls for the development of additional customer interfaces.
This lab will look at the look and feel of the Kibana interface, and some key navigation hints and tips.
Task 1 - Kibana Interface Explanation¶
This task will focus on an explanation of the Kibana interface and navigating different aspects of the interface.
Try changing the following:
- Time Range
- Index
- Dashboards
Note
Take your time to explore each of the interface elements.
Task 2 - Searching Kibana¶
In this task we will use two example search types to see how Kibana uses elasticsearch. These example searches will be the following:
- Field Search
- Query Bar
Field Search
Field searching is very useful in Kibana and can be used to see the types of data and values that Elasticsearch is indexing. To conduct a field search, do the following:
- Click on a field
- Examine the expanded field, note the values that elasticsearch is indexing
- Click the add button.
- Notice the field is in the Selected Field section.
Note
Take time to add multiple fields to the Selected Fields section and build up a set of interesting columns.
Query Bar
This type of searching searches all data fields, not only the Selected Fields as we did previously.
Note
Take time to add multiple fields to Selected Fields and use query terms to see the results.
Lab 2.2 – Creating Kibana Usefulness¶
This lab will focus on the creation of three key components of Kibana for useful display of information, namely:
- Searches
- Visualisations
- Dashboards
Task 2 - Creating Visualisations¶
Create and save three visualisations based on the above search or the previous lab.
Examine existing visualisations to understand how some of the different visualisations are constructed.
HOWTOs: Index¶
This section contains useful HOWTOs
HOWTO - how to do stuff¶
We will put extra stuff in here.
Task 1 – Update unknown index fields¶
At times, new fields may appear in the index based on software version or additional logging from iRules. It will be required to update the index to make these fields usable.
Note
Notice the ? symbol next to the field.
Update the index by clicking the refresh button.
Note the change in the indexed fields.
Task 2 - Manual Index Changes¶
Index changes in JSON can be done manually if importing from another system.
- Create a new search or visualisation
- Export the new search JSON
- Open the JSON and copy the index id
- Open the JSON to be imported and paste the updated index id