Friday, December 22, 2017

Ubuntu Apt not able to fetch Debian packages or run an 'apt-get update' from behind a proxy server


Apt honors the http_proxy and https_proxy values defined in the system environment.


So, to get past this, set the proxies as user or system environment variables; apt will then use them automatically.

To set the http_proxy and https_proxy variables per user, add them to any of the following files:

~/.profile
~/.bash_profile
~/.bashrc

or 

If these have to be set as system-wide environment variables, they can be set in

/etc/profile

or 

/etc/environment


or in a file under /etc/profile.d, for example /etc/profile.d/more_env_variables.sh


The entries for the proxy look like this:

export http_proxy=http://<Proxy Server FQDN or IP>:<Port>

export https_proxy=http://<Proxy Server FQDN or IP>:<Port>

export no_proxy=<IP1>,<FQDN1>,...

Note that no_proxy is a comma-separated list of hosts and domains that should bypass the proxy.
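As a concrete sketch (proxy.example.com:3128 and the bypass entries are placeholder values, not taken from any real network):

```shell
# Placeholder proxy values -- substitute your own server and port
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# no_proxy is a comma-separated list of hosts/domains that bypass the proxy
export no_proxy=localhost,127.0.0.1,.internal.example.com
```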


Note that as these files are executed at login time, you may have to log off and log back in, or source the profile file in the current shell:

. ~/.bash_profile

Please note the [SPACE] after the dot '.'.

An alternative: if the proxy should apply only to APT, or APT is the only thing that needs to reach the proxy server, you can specify the proxy server in /etc/apt/apt.conf

Or, depending on the Ubuntu distribution version, you can create a file under /etc/apt/apt.conf.d

with contents like this:

Acquire::http::proxy "http://<Proxy_server_FQDN_or_IP>:<PORT>";
Acquire::https::proxy "http://<Proxy_server_FQDN_or_IP>:<PORT>";

Replace the placeholders above with the proxy server FQDN or IP and the port used on your network.

Another alternative: if you do not want the proxy settings saved persistently and only need them for the current session, simply export the environment variables as below.


export http_proxy=http://<Proxy_server_FQDN_or_IP>:<PORT>
export https_proxy=http://<Proxy_server_FQDN_or_IP>:<PORT>
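If the proxy is needed for only a single apt invocation, apt also accepts the setting on the command line via -o; a sketch with a placeholder proxy value:

```shell
# One-off proxy for a single update; proxy.example.com:3128 is a placeholder
apt-get -o Acquire::http::Proxy="http://proxy.example.com:3128" update
```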



Docker CE: docker daemon unable to pull images from Docker Hub from behind a proxy


This happens because the docker daemon tries to reach Docker Hub on the internet directly, while the security policy requires that all internet traffic go via the proxy server.



If you are not sure which systemd unit file the docker service uses, you can find it with 'systemctl status docker' or 'systemctl status docker.service'.

The unit file path appears in the 'Loaded:' line of the output:

[root@rally docker.service.d]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-12-23 04:55:19 EST; 13min ago
     Docs: https://docs.docker.com
 Main PID: 22381 (dockerd)
   CGroup: /system.slice/docker.service
           ├─22381 /usr/bin/dockerd
           └─22387 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=...



Stop the docker service

systemctl stop docker



Edit the file  /usr/lib/systemd/system/docker.service



In the [Service] section, add 'Environment' entries for HTTP_PROXY and HTTPS_PROXY according to the proxy server through which the docker daemon will reach out to Docker Hub to pull images.

If there are certain IPs or FQDNs that the docker daemon should reach directly, without going through the proxy, put them in an 'Environment' entry named 'NO_PROXY'.

For the related syntax, see this excerpt from the file /usr/lib/systemd/system/docker.service:


[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID

Environment="HTTP_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="HTTPS_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="NO_PROXY=<FQDN1|IPAddr1|IPAddr2|FQDN2| ...>"

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Save and exit the file.

Issue a 'systemctl daemon-reload' so that systemd picks up the changed unit file on disk, then restart the docker service:

systemctl daemon-reload
systemctl restart docker
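As an aside, a systemd drop-in file keeps these settings out of the packaged unit, so a docker package upgrade does not overwrite them. A sketch (the filename http-proxy.conf is a conventional choice; the placeholder values are the same as above) for /etc/systemd/system/docker.service.d/http-proxy.conf:

```
[Service]
Environment="HTTP_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="HTTPS_PROXY=http://<PROXY_Server_NAME_OR_IP>:<PROXY_PORT>"
Environment="NO_PROXY=<FQDN1>,<IPAddr1>,..."
```

After creating it, run the same 'systemctl daemon-reload' and 'systemctl restart docker'.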

Friday, November 3, 2017

Installation of Apache Ant 1.10.x and docker-ce 17.x on CentOS7 Jenkins Master and Slaves




This requires that a JRE is installed. Please note that Ant 1.10.x requires JRE 1.8 to be installed.

This ensures that the Jenkins master and slaves have identical Docker and Ant installations, so that builds needing ant and docker can run identically on the Jenkins master and the Jenkins slave servers.

Notes:


  • JAVA_HOME must be set to the path where the JRE is installed.
  • The latest versions of Apache Ant are available from http://ant.apache.org/bindownload.cgi
  • Ant 1.10.x requires the Java 8 JRE.
  • We installed Java 8 from Oracle for Linux, as we are running these on CentOS 7 servers.


Please note that this is an RPM-based install of the 64-bit 1.8.0 update 152 from here



[root@jenkinsmaster ~]# which java
/usr/bin/java


[root@jenkinsmaster ~]# java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)



Installation of Ant 1.10.x


Download the ANT binaries 

wget http://redrockdigimark.com/apachemirror/ant/binaries/apache-ant-1.10.1-bin.tar.gz


Extract the Binaries to a folder, here the same is extracted into /opt

tar zxvf apache-ant-1.10.1-bin.tar.gz -C /opt


See the extracted folder

[root@jenkinsmaster apache-ant-1.10.1]# ls -la /opt
total 8
drwxr-xr-x.  3 root root   30 Nov  2 16:15 .
dr-xr-xr-x. 17 root root 4096 Oct 15 22:55 ..
drwxr-xr-x   6 root root 4096 Feb  2  2017 apache-ant-1.10.1
[root@jenkinsmaster apache-ant-1.10.1]#

This step is optional if ANT_HOME and PATH are set as shown below. It creates a soft link from the ant command in the extracted files to /usr/bin/ant.

[root@jenkinsmaster bin]# ln -s /opt/apache-ant-1.10.1/bin/ant /usr/bin/ant

Run the ant command without any options to ensure it runs properly; the 'Buildfile: build.xml does not exist!' error just means there is no build file in the current directory.



[root@jenkinsmaster bin]# ant
Buildfile: build.xml does not exist!
Build failed
[root@jenkinsmaster bin]#


See the version of ant installed


[root@jenkinsmaster bin]# ant -version
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
[root@jenkinsmaster bin]#



Append these lines to /etc/profile, ~/.bash_profile, or ~/.bashrc to set the Ant and Java environment variables.



Here we added these lines to /etc/profile:

export ANT_HOME=/opt/apache-ant-1.10.1
export PATH=$PATH:$ANT_HOME/bin
export JAVA_HOME=/usr/java/jdk1.8.0_152
export PATH=$PATH:$JAVA_HOME/bin
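A quick sanity check, using the same paths as the profile entries above, that the variables resolve and ant's bin directory actually lands on the PATH:

```shell
# Recreate the profile entries in the current shell
export ANT_HOME=/opt/apache-ant-1.10.1
export JAVA_HOME=/usr/java/jdk1.8.0_152
export PATH="$PATH:$ANT_HOME/bin:$JAVA_HOME/bin"
# Verify the ant bin directory is now on the PATH
case ":$PATH:" in
  *":$ANT_HOME/bin:"*) echo "ANT_HOME/bin is on PATH" ;;
  *)                   echo "ANT_HOME/bin is missing from PATH" ;;
esac
```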


Ensure that these are working 

Environment Variables for JAVA_HOME, ANT_HOME, PATH


[root@jekninss1 ~]# env | grep -i -e ant -e java -e path
ANT_HOME=/opt/apache-ant-1.10.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/apache-ant-1.10.1/bin:/usr/java/jdk1.8.0_152/bin
JAVA_HOME=/usr/java/jdk1.8.0_152



Version of Apache ant installed


[root@jekninss1 ~]# ant -version
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
[root@jekninss1 ~]#



version of Java JRE installed 


[root@jekninss1 ~]# java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
[root@jekninss1 ~]#

Follow the same instructions to set up Apache Ant on the Jenkins slaves as well, to ensure that ant can be run in the same manner from the Jenkins master or a Jenkins slave.


Installation of Docker on the Jenkins Master server and the Jenkins Slave servers


Add the Docker CE Repo 


Reference: 


yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


[root@jenkinsmaster yum.repos.d]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@jenkinsmaster yum.repos.d]#

Ensure that the docker repository shows up in the repolist


[root@jenkinsmaster yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
docker-ce-stable                                         | 2.9 kB     00:00
docker-ce-stable/x86_64/primary_db                         | 9.3 kB   00:07
Loading mirror speeds from cached hostfile
 * base: mirror.nbrc.ac.in
 * epel: mirror2.totbb.net
 * extras: mirror.nbrc.ac.in
 * jpackage-generic: mirrors.dotsrc.org
 * jpackage-generic-updates: mirrors.dotsrc.org
 * updates: mirror.nbrc.ac.in
repo id                     repo name                                     status
base/7/x86_64               CentOS-7 - Base                                9,591
docker-ce-stable/x86_64     Docker CE Stable - x86_64                         10
*epel/x86_64                Extra Packages for Enterprise Linux 7 - x86_6 12,042
extras/7/x86_64             CentOS-7 - Extras                                280
jpackage-generic            JPackage (free), generic                       3,307
jpackage-generic-updates    JPackage (free), generic                          29
updates/7/x86_64            CentOS-7 - Updates                             1,052

[root@jenkinsmaster yum.repos.d]#

Install docker-ce


yum installation of docker-ce

yum -y install docker-ce


Enable the docker service

systemctl enable docker

[root@jenkinsmaster ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start the docker service 

systemctl start docker


Ensure that docker is set up properly

docker run 'hello-world'

This pulls the 'hello-world' docker image and then runs it:

[root@jenkinsmaster ~]# docker run 'hello-world'
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
5b0f327be733: Pull complete
Digest: sha256:175735360662078abd70dacb73c5518d5b3ae7c1ed069d22def5da57c3e917d6
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

[root@jenkinsmaster ~]#

Complete the same steps on the Jenkins Slave server(s) also.


Thursday, October 19, 2017

OpenStack Orchestration Heat to implement a glance cirros image heat template



The YAML file for the template, saved as '13cirros64bitimage.yml', is:
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description: glance image

resources:
  glancecirros64bitimage:
    type: OS::Glance::Image
    properties:
      name: cirros_64_qcow2
      is_public: true
      container_format: bare
      disk_format: qcow2
      location: http://webserver.netx.sujit.com/images/cirros-0.3.5-x86_64-disk.img

outputs:
  glancecentos64bitimage_info:
    value: { get_attr: [ glancecirros64bitimage ] }


------



Running 'openstack stack create' for the stack implementation 


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack create -t 13cirros64bitimage.yml cirros64bitglanceimage
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | e104d8f7-4a6a-4bb8-9494-d37b5d43b563 |
| stack_name          | cirros64bitglanceimage               |
| description         | glance image                         |
| creation_time       | 2017-10-19T22:17:32Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#
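Note that CREATE_IN_PROGRESS only means Heat accepted the stack; to see whether it finished, the status can be polled (a sketch using the stack name above; the -c/-f flags narrow the output to just the status value):

```shell
# Shows e.g. CREATE_COMPLETE or CREATE_FAILED once Heat finishes
openstack stack show cirros64bitglanceimage -c stack_status -f value
```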

---


Now confirm that the glance image is in place


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# glance image-list | grep cirros_64_qcow2
| 634b286c-39b3-4540-a9c0-9dde812d548d | cirros_64_qcow2 |
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

--

See the image 

The image is in place and is also active:


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# glance image-show 634b286c-39b3-4540-a9c0-9dde812d548d
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2017-10-19T22:17:33Z                 |
| disk_format      | qcow2                                |
| id               | 634b286c-39b3-4540-a9c0-9dde812d548d |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros_64_qcow2                      |
| owner            | 49b25ce4022c492fa0c1eab4fc6c7419     |
| protected        | False                                |
| size             | 13267968                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2017-10-19T22:17:33Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

-------

Using OpenStack Orchestration heat stack to have the custom flavor in place OpenStack Newton



The stack YAML file here helps to create a custom flavor 


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# cat 12NovaFlavor.yml
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description: Nova Flavor

resources:
  NovaFlavor1:
    type: OS::Nova::Flavor
    properties:
      is_public: true
      name: novaflavor1
      disk: 10
      ram: 4096
      swap: 1
      vcpus: 1
outputs:
  NovaFlavor_info:
    value: { get_attr: [NovaFlavor1]}
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


When run, this stack will create a flavor named 'novaflavor1' with the following specifications:


  • Disk size: 10 GB
  • RAM size: 4 GB (the 'ram' property is specified in MB)
  • Swap size: 1 MB (the 'swap' property is specified in MB, so 'swap: 1' is 1 MB, not 1 GB)
  • Virtual CPU count: 1


---

Run the stack



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack create -t 12NovaFlavor.yml novaflavor1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | cb21ee12-dbee-49a3-8957-3a8ea444c320 |
| stack_name          | novaflavor1                          |
| description         | Nova Flavor                          |
| creation_time       | 2017-10-19T21:54:51Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


The above shows that the heat stack is being implemented.


Confirming the creation of the nova flavor



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack flavor list
+--------------------------------------+-------------+-------+------+-----------+-------+-----------+
| ID                                   | Name        |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-------------+-------+------+-----------+-------+-----------+
| 1                                    | m1.tiny     |   512 |    1 |         0 |     1 | True      |
| 1e77331d-56d1-4849-a260-dafc4b88cb76 | novaflavor1 |  4096 |   10 |         0 |     1 | True      |
| 2                                    | m1.small    |  2048 |   20 |         0 |     1 | True      |
| 3                                    | m1.medium   |  4096 |   40 |         0 |     2 | True      |
| 4                                    | m1.large    |  8192 |   80 |         0 |     4 | True      |
| 5                                    | m1.xlarge   | 16384 |  160 |         0 |     8 | True      |
+--------------------------------------+-------------+-------+------+-----------+-------+-----------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

OpenStack Orchestration Heat for Creation of a Public Key using Heat Stack OpenStack newton






--

The stack YAML file for creation of a public key is:


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# cat 11privatekey.yml
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description: private key

resources:
  privatekey1:
    type: OS::Nova::KeyPair
    properties:
      name: keypair1
      save_private_key: true

outputs:
  privatekey1_info:
    value: { get_attr: [privatekey1]}
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


------

Create the stack
The stack creates a keypair named 'keypair1'.

---

openstack stack create -t 11privatekey.yml privatekeypair1

[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack create -t 11privatekey.yml privatekeypair1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | 692e74db-2a99-43fe-ad3b-5ff169d982bc |
| stack_name          | privatekeypair1                      |
| description         | private key                          |
| creation_time       | 2017-10-19T21:49:37Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


--
The stack is being implemented.
Confirm that the keypair 'keypair1' has been created:

--

openstack keypair show keypair1

[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack keypair show keypair1

+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| created_at  | 2017-10-19T21:49:38.000000                      |
| deleted     | False                                           |
| deleted_at  | None                                            |
| fingerprint | 7a:f3:a5:1e:36:86:e7:25:f6:99:3c:b0:75:e7:3f:e1 |
| id          | 2                                               |
| name        | keypair1                                        |
| updated_at  | None                                            |
| user_id     | 1db28c8710b04be7968935f5edcf0971                |
+-------------+-------------------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#
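Since the template sets save_private_key: true, the generated private key can be read back from the stack outputs; a sketch using the stack and output names defined above:

```shell
# 'privatekey1_info' is the output block declared in 11privatekey.yml
openstack stack output show privatekeypair1 privatekey1_info
```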

---

OpenStack Orchestration Heat to implement a Security Group along with the rules.

This stack defines a security group and also creates the rules in that group.


---

The heat stack YAML file to add the rules to the security group is 



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# cat 09securitygroup.yml
---
# for Newton release of OpenStack
#
heat_template_version: 2016-10-14

description: put a security group

resources:
  securitygroup:
    type: OS::Neutron::SecurityGroup
    properties:
      name: securitygroup80-443-22-ICMP
      #rules:
      #type: list
      rules:
        - { direction: ingress, ethertype: IPv4, protocol: icmp, remote_ip_prefix: 0.0.0.0/0 }
        - { direction: ingress, ethertype: IPv4, port_range_min: 22, port_range_max: 22, protocol: tcp, remote_ip_prefix: 0.0.0.0/0 }
        - { direction: ingress, ethertype: IPv4, port_range_min: 80, port_range_max: 80, protocol: tcp, remote_ip_prefix: 0.0.0.0/0 }
        - { direction: ingress, ethertype: IPv4, port_range_min: 443, port_range_max: 443, protocol: tcp, remote_ip_prefix: 0.0.0.0/0 }

outputs:
  subnet_info:
    value: { get_attr: [securitygroup]}
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#

---


  • The above YAML template, implemented as a stack via OpenStack orchestration, adds the following rules to the security group 'securitygroup80-443-22-ICMP'.
  • While implementing the stack, this YAML file also creates the security group 'securitygroup80-443-22-ICMP' itself first.



ingress from 0.0.0.0/0 for PING (ICMP)
ingress from 0.0.0.0/0 for SSH TCP 22
ingress from 0.0.0.0/0 for HTTP TCP 80
ingress from 0.0.0.0/0 for HTTPS TCP 443

---


Implement the stack



[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# openstack stack create -t 09securitygroup.yml securitygroup80-443-22-ICMP
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | ec487918-3130-4a5c-9302-4711119c2cd9 |
| stack_name          | securitygroup80-443-22-ICMP          |
| description         | put a security group                 |
| creation_time       | 2017-10-19T21:40:39Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


Heat is implementing the stack.

----------



Confirm the security group and rules


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# neutron security-group-list | grep ICMP
| dc2a8841-cd94-4349-8643-942f2b2596b7 | securitygroup80-443-22-ICMP | egress, IPv4                                                         |
[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]#


Confirm the rules in the security group:


[root@newtonallinone HeatOrchestrationTemplates(keystone_admin)]# neutron security-group-show securitygroup80-443-22-ICMP
+----------------------+--------------------------------------------------------------------+
| Field                | Value                                                              |
+----------------------+--------------------------------------------------------------------+
| created_at           | 2017-10-19T21:40:40Z                                               |
| description          |                                                                    |
| id                   | dc2a8841-cd94-4349-8643-942f2b2596b7                               |
| name                 | securitygroup80-443-22-ICMP                                        |
| project_id           | 49b25ce4022c492fa0c1eab4fc6c7419                                   |
| revision_number      | 5                                                                  |
| security_group_rules | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "ingress",                                       |
|                      |      "protocol": "tcp",                                            |
|                      |      "description": "",                                            |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": "0.0.0.0/0",                              |
|                      |      "port_range_max": 22,                                         |
|                      |      "updated_at": "2017-10-19T21:40:40Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": 22,                                         |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:40Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "267fc284-9657-41c1-b221-bb737d50e709"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "ingress",                                       |
|                      |      "protocol": "icmp",                                           |
|                      |      "description": "",                                            |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": "0.0.0.0/0",                              |
|                      |      "port_range_max": null,                                       |
|                      |      "updated_at": "2017-10-19T21:40:40Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": null,                                       |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:40Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "64b97157-fa46-4bfe-84cb-da7ba9c8e76b"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "ingress",                                       |
|                      |      "protocol": "tcp",                                            |
|                      |      "description": "",                                            |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": "0.0.0.0/0",                              |
|                      |      "port_range_max": 443,                                        |
|                      |      "updated_at": "2017-10-19T21:40:41Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": 443,                                        |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:41Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "87d969f5-d315-4ab3-ba28-fe1de7f0988b"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "egress",                                        |
|                      |      "protocol": null,                                             |
|                      |      "description": null,                                          |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": null,                                     |
|                      |      "port_range_max": null,                                       |
|                      |      "updated_at": "2017-10-19T21:40:40Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": null,                                       |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:40Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "b1903a87-8ae4-40d0-b6d9-75276eb2e4cf"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "egress",                                        |
|                      |      "protocol": null,                                             |
|                      |      "description": null,                                          |
|                      |      "ethertype": "IPv6",                                          |
|                      |      "remote_ip_prefix": null,                                     |
|                      |      "port_range_max": null,                                       |
|                      |      "updated_at": "2017-10-19T21:40:40Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": null,                                       |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:40Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "b845f9ed-4aed-46f4-8480-de6560f781b8"                  |
|                      | }                                                                  |
|                      | {                                                                  |
|                      |      "remote_group_id": null,                                      |
|                      |      "direction": "ingress",                                       |
|                      |      "protocol": "tcp",                                            |
|                      |      "description": "",                                            |
|                      |      "ethertype": "IPv4",                                          |
|                      |      "remote_ip_prefix": "0.0.0.0/0",                              |
|                      |      "port_range_max": 80,                                         |
|                      |      "updated_at": "2017-10-19T21:40:40Z",                         |
|                      |      "security_group_id": "dc2a8841-cd94-4349-8643-942f2b2596b7",  |
|                      |      "port_range_min": 80,                                         |
|                      |      "revision_number": 1,                                         |
|                      |      "tenant_id": "49b25ce4022c492fa0c1eab4fc6c7419",              |
|                      |      "created_at": "2017-10-19T21:40:40Z",                         |
|                      |      "project_id": "49b25ce4022c492fa0c1eab4fc6c7419",             |
|                      |      "id": "f5633437-f842-470d-997c-e5279e31f0eb"                  |
|                      | }                                                                  |
| tenant_id            | 49b25ce4022c492fa0c1eab4fc6c7419                                   |
| updated_at           | 2017-10-19T21:40:41Z                                               |
+----------------------+--------------------------------------------------------------------+
