Elasticsearch, updating your mapping

A very common issue when working with Elasticsearch and refining the data model of a project is hitting errors because fields are mapped as generic text, or because aggregations fail since fielddata is not enabled:

Fielddata is disabled on text fields by default. Set fielddata=true on [your_field_name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory.

First of all, DO NOT enable fielddata: think about it twice and say “NO”. There is always a better way, and fielddata is very expensive in terms of memory. Read this for details: https://www.elastic.co/guide/en/elasticsearch/reference/current/fielddata.html

Retrieve the current mapping.

This is the first step: you have to understand what is wrong and how you intend to fix it. Use

curl -XGET "http://host:9200/indexname/_mapping"

to retrieve the current mapping of your index.

Create or update the mapping for your index.

First of all, have a look at this URL: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html

It will help clarify which types you can use / specify in your mapping. Most likely you will need to add ip, date and long fields, especially if you’re going to deal with nested data.
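
For example, a mapping fragment declaring an IP address, a date and a numeric field could look like the sketch below (the field names are just placeholders, not from a real index):

"properties": {
  "client_ip":  { "type": "ip" },
  "timestamp":  { "type": "date" },
  "bytes_sent": { "type": "long" }
}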

Generally speaking, the template you’re going to apply will look like the one below:

{
  "order": 1,
  "settings": {
    ... this is where you specify any settings ...
  },
  "index_patterns": ["yourpattern"],
  "mappings": {
    ... this can be exactly what you get from the _mapping API of the index ...
  }
}

You may want to do that because some fields are currently mapped with the wrong type (e.g. numbers or IP addresses mapped as text).

Once you are done, you can just invoke the _template API to set/update:

curl -H 'Content-Type: application/json' http://localhost:9200/_template/yourpattern -d@/path/to/yourpattern.template.json

the template that will be used for any index created from now on, so if you specify

"index_patterns": ["yourpattern-*"]

any new index matching that pattern will inherit the mapping (e.g. yourpattern-2019.01.15)
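
To double-check that the template has been stored as expected, you can read it back with a GET on the same endpoint:

curl -XGET "http://localhost:9200/_template/yourpattern"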

Update an existing index

The above template won’t affect the indices that already exist, so you might need to “migrate” the existing data so it is mapped according to the new template. You can do that by telling Elasticsearch to create a new index with the _reindex API.

The example below assumes your existing data is stored in the index yourpattern-1-2019.01.15. Before proceeding, make sure nothing is writing to your index (it can be a good idea to put a version number in the index name, so you can route the traffic to a new index just by telling the clients, like logstash, to use a different number), then run the curl below:

curl -XPOST "http://127.0.0.1:9200/_reindex?wait_for_completion=false" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "yourpattern-1-2019.01.15"
  },
  "dest": {
    "index": "yourpattern-1.1-2019.01.15"
  }
}'

By specifying ?wait_for_completion=false Elasticsearch will return a task ID that can be used to monitor the process with the _tasks API:

curl -XGET "http://127.0.0.1:9200/_tasks/MM-HD_-cRXeOF6QESzidSQ:1155556"

The Reindex API is an incredibly powerful tool; I recommend reading this for more details: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html

Once the reindex has completed, you can safely remove the previous index with the wrongly mapped data.
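
For example, assuming the index names used above, removing the old index is a single call (double-check the name before running it, the deletion cannot be undone):

curl -XDELETE "http://127.0.0.1:9200/yourpattern-1-2019.01.15"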

Elasticsearch, some notes about shards and replicas

An Elasticsearch index is composed of one or more shards. Shards can be primary or replica: primary shards are the first involved in a write, replicas are created or updated after the primary.

Shards.

Shards are how Elasticsearch scales horizontally and distributes load among the nodes of a cluster. A shard is a Lucene index and can hold up to Integer.MAX_VALUE - 128 documents (2,147,483,519). You should tune your index to make sure you won’t have shards larger than 20-30 GB (or you will run into out-of-memory issues very soon). Changing the number of shards requires building/re-building the index, so you must set this while creating the index (or through a proper template configuration),

see https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-params.html

{
  "settings": {
    "index.number_of_shards": 10
  }
}

See template API for details: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
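
As a quick sketch, the same setting can also be applied directly when creating the index (the index name here is just a placeholder):

curl -XPUT "http://localhost:9200/indexname" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 10
  }
}'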

Replicas.

Replicas are important to scale reads and lower the pressure on primaries (replicas are usually involved in reads alongside the primary) and to make your index resilient to failures. A replica can be promoted to primary shard if needed.

The number of replicas can be set on the fly:

curl -X PUT "localhost:9200/indexname/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 2
  }
}'
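
You can then verify how primary and replica shards are allocated with the _cat API, e.g.

curl -XGET "http://localhost:9200/_cat/shards/indexname?v"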

CentOS 7: Disabling OOMKiller for a process

site unreliability

In the latest version of the proc filesystem the OOMKiller has had some adjustments.  The valid range is now -1000 to +1000; previously it was -16 to +15 with a special score of -17 to outright disable it.  It also now uses /proc/<pid>/oom_score_adj instead of /proc/<pid>/oom_adj.  You can read the finer details here.

Given that, systemd now includes OOMScoreAdjust specifically for altering this.  To fully disable OOMKiller on a service simply add OOMScoreAdjust=-1000 directly underneath a [Service] definition, as follows.

...
[Service]
OOMScoreAdjust=-1000
...

This score can be adjusted if you want to ensure the parent PID lives while child processes can still be safely reaped: setting it to something like -999 means that if “/bin/parent” spawns “/bin/parent --memory-hungry-child”, the child will be killed first.
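
As a minimal sketch (the unit and process names below are placeholders, not from the original post), you can apply the setting with a drop-in override instead of editing the unit file shipped by the package, and then verify the score the kernel actually uses:

sudo systemctl edit myservice.service   # opens an override file; add the two lines below and save
#   [Service]
#   OOMScoreAdjust=-1000
sudo systemctl restart myservice.service
cat /proc/$(pidof -s mydaemon)/oom_score_adj   # should print -1000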

If you have a third-party daemon (like Datadog, used in this example below) which manages itself and uses a sysvinit script you can still calm…

How to fix not working targetcli/target on CentOS 7.1.1503

If you’re practicing on CentOS (as I do) for your RHCE/RHCSA exam you might be installing and working on a couple of VMs running CentOS 7.1 (even though 7.0 should be the reference).

You may have noticed that on that release the targetcli tool and its service (target.service) do not work because of a python runtime error:

Installed:
 targetcli.noarch 0:2.1.fb41-3.el7

Dependency Installed:
 pyparsing.noarch 0:1.5.6-9.el7 python-configshell.noarch 1:1.1.fb18-1.el7 python-kmod.x86_64 0:0.9-4.el7
 python-rtslib.noarch 0:2.1.fb57-3.el7 python-urwid.x86_64 0:1.1.1-3.el7

Complete!
[root@server ~]# targetcli
Traceback (most recent call last):
 File "/bin/targetcli", line 24, in <module>
 from targetcli import UIRoot
 File "/usr/lib/python2.7/site-packages/targetcli/__init__.py", line 18, in <module>
 from .ui_root import UIRoot
 File "/usr/lib/python2.7/site-packages/targetcli/ui_root.py", line 27, in <module>
 from rtslib_fb import RTSRoot
 File "/usr/lib/python2.7/site-packages/rtslib_fb/__init__.py", line 24, in <module>
 from .root import RTSRoot
 File "/usr/lib/python2.7/site-packages/rtslib_fb/root.py", line 26, in <module>
 from .target import Target
 File "/usr/lib/python2.7/site-packages/rtslib_fb/target.py", line 24, in <module>
 from six.moves import range
ImportError: cannot import name range

You will also get an error on target.service:

[root@server ~]# systemctl start target.service
Job for target.service failed. See 'systemctl status target.service' and 'journalctl -xn' for details.
[root@server ~]# journalctl -xn
-- Logs begin at Sun 2016-08-28 22:35:23 CEST, end at Sun 2016-08-28 22:44:29 CEST. --
Aug 28 22:44:29 server.yari.local target[2208]: from six.moves import range
Aug 28 22:44:29 server.yari.local target[2208]: ImportError: cannot import name range
Aug 28 22:44:29 server.yari.local systemd[1]: target.service: main process exited, code=exited, status=1/FAILURE
Aug 28 22:44:29 server.yari.local systemd[1]: Failed to start Restore LIO kernel target configuration.
-- Subject: Unit target.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit target.service has failed.
--
-- The result is failed.
Aug 28 22:44:29 server.yari.local systemd[1]: Unit target.service entered failed state.

Following the trace of the python error, it seems clear that the root cause is in the six.moves module provided by the python-six rpm package. On RHEL 7.1 a newer version ships (and of course both targetcli/target work properly on RHEL 7.1, as you can test on AWS if you can still use a free subscription).
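
A quick way to confirm you are affected is to check which python-six build is installed; on a broken 7.1.1503 host it will still be the old 1.3.0 build (the same one that shows up in the “Cleanup” line of the transaction further below):

rpm -q python-six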

The best way to fix the issue is to simply upgrade that package to the newer version available from the repository:

yum update -y python-six

and target/targetcli will start working just like on RHEL 7.1:

Dependencies Resolved

==========================================================================================================================================
 Package Arch Version Repository Size
==========================================================================================================================================
Updating:
 python-six noarch 1.9.0-2.el7 base 29 k

Transaction Summary
==========================================================================================================================================
Upgrade 1 Package

Total download size: 29 k
Downloading packages:
No Presto metadata available for base
python-six-1.9.0-2.el7.noarch.rpm | 29 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Updating : python-six-1.9.0-2.el7.noarch 1/2
 Cleanup : python-six-1.3.0-4.el7.noarch 2/2
 Verifying : python-six-1.9.0-2.el7.noarch 1/2
 Verifying : python-six-1.3.0-4.el7.noarch 2/2

Updated:
 python-six.noarch 0:1.9.0-2.el7

Complete!
[root@server ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ......................................................................................................................... [...]
 o- backstores .............................................................................................................. [...]
 | o- block .................................................................................................. [Storage Objects: 0]
 | o- fileio ................................................................................................. [Storage Objects: 0]
 | o- pscsi .................................................................................................. [Storage Objects: 0]
 | o- ramdisk ................................................................................................ [Storage Objects: 0]
 o- iscsi ............................................................................................................ [Targets: 0]
 o- loopback ......................................................................................................... [Targets: 0]
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

[root@server ~]# systemctl start target
[root@server ~]# systemctl status target
target.service - Restore LIO kernel target configuration
 Loaded: loaded (/usr/lib/systemd/system/target.service; disabled)
 Active: active (exited) since Sun 2016-08-28 22:50:38 CEST; 5s ago
 Process: 2367 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 2367 (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/target.service

Aug 28 22:50:38 server.yari.local systemd[1]: Starting Restore LIO kernel target configuration...
Aug 28 22:50:38 server.yari.local systemd[1]: Started Restore LIO kernel target configuration.

Enjoy!

RH7LAB – Kerberized NFS with IPA Server

Currently I’m working on my RHCE certification and I’m facing some issues in setting up a lab to properly test Kerberized NFS shares. Since this has been reported as one of the most complex topics to test, I hope someone will find this post helpful for his/her RHCE preparation.

I have deployed my lab on 3 t2.micro EC2 instances on Amazon, using my free annual subscription. That t2.micro configuration offers a host with 1 core, 1 GB of memory and a 10 GB disk, which will serve you (almost) well while studying for your RHCE (and RHCSA) exam.

I will be working with the RedHat 7.2 AMI that is available directly from the AWS images.

 

The 3 hosts are configured like below:

  • ipa1a.example.com, the host where you will install the ipa server that will provide LDAP/Kerberos to the lab
  • backend1a.example.com, the host where you will install and configure your NFS server
  • frontend1a.example.com, the host where you will install and configure your NFS client

[Diagram: RH7LAB - Kerberized NFS lab topology (cloudcraft)]

Please note: hostname configuration on AWS/EC2 requires user data and the cloud-init tool (this is not part of a standard RH deployment nor of the RHCE objectives). Also, IPA setup isn’t part of the RHCE objectives, but since you will need it to practice with Kerberos and IdM it is worth knowing something about how it works…

On AWS I’m using a single jump-box with a dynamically assigned public IP, accepting SSH connections only from my home network (check AWS Security Groups for more details). From that jumpbox I can reach my RH7 lab (from its /etc/hosts):

 172.31.1.10 ipa1a.example.com ipa1a
 172.31.1.20 backend1a.example.com backend1a
 172.31.1.30 frontend1a.example.com frontend1a

1. Prepare the hosts.

The default template for the RedHat 7 hosts on AWS provides a very minimal set of features that most likely won’t fit your needs for RHCE / RHCSA practice, so you will need to make some changes to get a properly configured host. A major issue is memory: by default your virtual host has no swap space configured, and while installing IPA you will run into “Out of Memory” errors triggered by the ipa-server-install script, so I recommend adding at least a 1 GB volume and configuring it as a swap partition.
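
A minimal sketch of the swap setup, assuming the extra volume shows up as /dev/xvdb (the device name depends on how you attach it):

sudo -i
mkswap /dev/xvdb
swapon /dev/xvdb
echo "/dev/xvdb swap swap defaults 0 0" >> /etc/fstab
swapon -s    # verify the swap space is active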

Once named is installed on ipa1a.example.com it is also required that all the hosts in your lab use ipa1a.example.com as DNS, and that the DHCP client is prevented from updating the nameserver entry in /etc/resolv.conf. Both are tasks you should be able to manage, since they are in the RHCSA / RHCE objectives (you will find more details later in this post).

If you have trouble configuring the hostname, please have a look at how cloud-init and user-data work on AWS (as far as I can see, the normal hostname configuration procedure documented by RH doesn’t work properly on AWS and won’t survive a reboot of the instance); you may want to add in your “User Data” something like:

#cloud-config
 hostname: frontend1a.example.com

in order to properly set the desired hostname on boot.
Please note: you need to stop your EC2 instance in order to update your User Data.

Also, I recommend installing the “Server with GUI” group on every host, since it provides an almost complete environment for your RHCSA/RHCE practice:

sudo yum groupinstall -y "Server with GUI"

2. Install the required packages for IPA server (ipa1a.example.com).

You will need to install the ipa server packages:

sudo yum install -y ipa-server bind nds-ldap ipa-server-dns

Please note: you will read somewhere that ipa-server-dns is not required and that you can rely on /etc/hosts. I recommend installing it and configuring your other hosts to use ipa1a.example.com as DNS: it will make your life a lot easier during the client configuration.

3. Configure your IPA server (ipa1a.example.com).

I will use example.com as domain and realm. First of all, verify your host resolves to the external IP, not to 127.0.0.1; your /etc/hosts should look like below:

[root@ipa1a ~]# getent hosts
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost6.localdomain6 localhost6
172.31.1.10 ipa1a.example.com ipa1a

It is important that your hostname doesn’t resolve on 127.0.0.1 or ::1.

Then proceed with the configuration of the ipa server, which for me will be ipa1a.example.com:

sudo -i
ipa-server-install --hostname=ipa1a.example.com -n example.com -r EXAMPLE.COM -p password -a password -U --no-ntp --setup-dns --forwarder=172.31.0.2

For me “172.31.0.2” is the IP of the available resolver; you may have a different IP. Bear in mind this DNS will be configured as a forwarder in order to resolve external hosts, so it has to be a valid DNS, otherwise once your hosts are configured to use ipa1a as primary DNS they will fail to resolve external hostnames for updates and other stuff.

It will take some time to complete the configuration and deployment of the IPA server and all its components. The installation procedure is complete when you receive the message below:

==============================================================================
Setup complete

Next steps:
 1. You must make sure these network ports are open:
 TCP Ports:
 * 80, 443: HTTP/HTTPS
 * 389, 636: LDAP/LDAPS
 * 88, 464: kerberos
 * 53: bind
 UDP Ports:
 * 88, 464: kerberos
 * 53: bind
 2. You can now obtain a kerberos ticket using the command: 'kinit admin'
 This ticket will allow you to use the IPA tools (e.g., ipa user-add)
 and the web user interface.
 3. Kerberos requires time synchronization between clients
 and servers for correct operation. You should consider enabling ntpd.

Be sure to back up the CA certificates stored in /root/cacert.p12
These files are required to create replicas. The password for these
files is the Directory Manager password

The next step is to configure your firewall as suggested above:

[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=http
success
[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=https
success
[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=ldap
success
[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=ldaps
success
[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=kerberos
success
[root@ipa1a ~]# sudo firewall-cmd --permanent --add-service=dns
success
[root@ipa1a ~]# sudo firewall-cmd --reload
success

Now you can verify the ipa server has been installed properly by performing a quick check, like obtaining a Kerberos ticket in order to access the IPA tools (the password is ‘password’):

[root@ipa1a ~]# sudo -i ; kinit admin
Password for admin@EXAMPLE.COM:

Then look for the admin profile in the LDAP

[root@ipa1a ~]# ipa user-find admin
--------------
1 user matched
--------------
 User login: admin
 Last name: Administrator
 Home directory: /home/admin
 Login shell: /bin/bash
 UID: 702400000
 GID: 702400000
 Account disabled: False
 Password: True
 Kerberos keys available: True
----------------------------
Number of entries returned 1
----------------------------

The command “kinit admin” creates a Kerberos ticket you can use to access the management tools (ipa commands) as “admin”. The password is the one we specified with “-a password”, so it will be ‘password’ unless you specified something else while running the ipa-server-install command. Every time you open a shell and plan to access the ipa tools, don’t forget to get a ticket with “kinit admin”.

4. Configure DNS, hosts and services on IPA.

In order to properly work and interact with Kerberos, every host and service must be provisioned, so we’ll need to configure backend1a and frontend1a and enable the NFS service for both. Since your ipa1a instance is configured to use DHCP (and on a cloud environment I do recommend working like this), just make sure you don’t let the DHCP client update your /etc/resolv.conf with the nameserver the DHCP server provides (this is part of what a RHCSA is supposed to know, see “man -K PEERDNS” for details).

First, update your DNS so that it resolves the involved hosts. Add the DNS record for backend1a (IP 172.31.1.20); the command will prompt for the record type (A) and the IP address (172.31.1.20 or whatever you have defined for your lab):

[root@ipa1a ~]# ipa dnsrecord-add example.com backend1a
Please choose a type of DNS resource record to be added
The most common types for this type of zone are: A, AAAA

DNS resource record type: A
A IP Address: 172.31.1.20
 Record name: backend1a
 A record: 172.31.1.20

Then add the DNS record for frontend1a (IP 172.31.1.30); the command will prompt for the record type (A) and the IP address (172.31.1.30).

[root@ipa1a ~]# ipa dnsrecord-add example.com frontend1a
Please choose a type of DNS resource record to be added
The most common types for this type of zone are: A, AAAA

DNS resource record type: A
A IP Address: 172.31.1.30
 Record name: frontend1a
 A record: 172.31.1.30

Now create the profiles for the hosts with ipa host-add <host>:

[root@ipa1a ~]# ipa host-add backend1a.example.com
----------------------------------
Added host "backend1a.example.com"
----------------------------------
 Host name: backend1a.example.com
 Principal name: host/backend1a.example.com@EXAMPLE.COM
 Password: False
 Keytab: False
 Managed by: backend1a.example.com
[root@ipa1a ~]# ipa host-add frontend1a.example.com
-----------------------------------
Added host "frontend1a.example.com"
-----------------------------------
 Host name: frontend1a.example.com
 Principal name: host/frontend1a.example.com@EXAMPLE.COM
 Password: False
 Keytab: False
 Managed by: frontend1a.example.com

And finally create the NFS service for both with ipa service-add <service/host>

[root@ipa1a ~]# ipa service-add nfs/backend1a.example.com
-----------------------------------------------------
Added service "nfs/backend1a.example.com@EXAMPLE.COM"
-----------------------------------------------------
 Principal: nfs/backend1a.example.com@EXAMPLE.COM
 Managed by: backend1a.example.com
[root@ipa1a ~]# ipa service-add nfs/frontend1a.example.com
------------------------------------------------------
Added service "nfs/frontend1a.example.com@EXAMPLE.COM"
------------------------------------------------------
 Principal: nfs/frontend1a.example.com@EXAMPLE.COM
 Managed by: frontend1a.example.com

5. Configure your hosts to work with the IPA server.

Now that your server is configured it’s time to bind the other hosts to Kerberos and let them use its authentication methods for your lab and tests.

Add 1 GB of swap to your host (if you haven’t done that already), then configure the host to use ipa1a.example.com (for me 172.31.1.10) as DNS resolver by updating /etc/sysconfig/network-scripts/ifcfg-eth0 (or any other relevant file): disable the PEERDNS option and add DNS1 and DOMAIN entries, e.g.

DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="no"
IPV6INIT="no"
DOMAIN=example.com
DNS1=172.31.1.10

Reboot your hosts and verify everything is working fine.

Perform the above steps on all the hosts you want to use for your RedHat 7 lab (most likely frontend1a and backend1a).

Then verify name resolving is properly working (e.g. with dig ipa1a.example.com):

[root@backend1a ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 172.31.1.10


[root@backend1a ~]# dig ipa1a.example.com

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> ipa1a.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 779
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ipa1a.example.com. IN A

;; ANSWER SECTION:
ipa1a.example.com. 1200 IN A 172.31.1.10

;; AUTHORITY SECTION:
example.com. 86400 IN NS ipa1a.example.com.

;; Query time: 1 msec
;; SERVER: 172.31.1.10#53(172.31.1.10)
;; WHEN: Sat Aug 20 11:06:25 EDT 2016
;; MSG SIZE rcvd: 76

Now you can proceed by installing the ipa-client packages.

sudo yum install -y ipa-client ipa-admintools

Then proceed with the configuration by executing

sudo -i
ipa-client-install --hostname=<hostname> --enable-dns-update

replacing <hostname> with frontend1a.example.com or backend1a.example.com. You will be asked for a user authorized to enroll computers: specify “admin” and, when prompted for the password, type “password”.

[root@frontend1a ~]# ipa-client-install --hostname=frontend1a.example.com --enable-dns-update
WARNING: ntpd time&date synchronization service will not be configured as
conflicting service (chronyd) is enabled
Use --force-ntpd option to disable it and force configuration of ntpd

Discovery was successful!
Client hostname: frontend1a.example.com
Realm: EXAMPLE.COM
DNS Domain: example.com
IPA Server: ipa1a.example.com
BaseDN: dc=example,dc=com

Continue to configure the system with these values? [no]: yes
Skipping synchronizing time with NTP server.
User authorized to enroll computers: admin
Password for admin@EXAMPLE.COM:
Successfully retrieved CA cert
 Subject: CN=Certificate Authority,O=EXAMPLE.COM
 Issuer: CN=Certificate Authority,O=EXAMPLE.COM
 Valid From: Sat Aug 20 14:15:41 2016 UTC
 Valid Until: Wed Aug 20 14:15:41 2036 UTC

Enrolled in IPA realm EXAMPLE.COM
Created /etc/ipa/default.conf
New SSSD config will be created
Configured sudoers in /etc/nsswitch.conf
Configured /etc/sssd/sssd.conf
Configured /etc/krb5.conf for IPA realm EXAMPLE.COM
trying https://ipa1a.example.com/ipa/json
Forwarding 'ping' to json server 'https://ipa1a.example.com/ipa/json'
Forwarding 'ca_is_enabled' to json server 'https://ipa1a.example.com/ipa/json'
Systemwide CA database updated.
Added CA certificates to the default NSS database.
Missing reverse record(s) for address(es): 172.31.1.30.
Adding SSH public key from /etc/ssh/ssh_host_rsa_key.pub
Adding SSH public key from /etc/ssh/ssh_host_ecdsa_key.pub
Adding SSH public key from /etc/ssh/ssh_host_ed25519_key.pub
Forwarding 'host_mod' to json server 'https://ipa1a.example.com/ipa/json'
SSSD enabled
Configured /etc/openldap/ldap.conf
Configured /etc/ssh/ssh_config
Configured /etc/ssh/sshd_config
Configuring example.com as NIS domain.
Client configuration complete.

Now you can perform some checks and verify the communication between the hosts and the server. From the host you can use “kinit admin” and access the ipa tools in the same way you do on the server, so you can look up and check services, e.g.

[root@frontend1a ~]# kinit admin
Password for admin@EXAMPLE.COM:
[root@frontend1a ~]# ipa service-find nfs
------------------
2 services matched
------------------
 Principal: nfs/backend1a.example.com@EXAMPLE.COM
 Keytab: False
 Managed by: backend1a.example.com

Principal: nfs/frontend1a.example.com@EXAMPLE.COM
 Keytab: False
 Managed by: frontend1a.example.com
----------------------------
Number of entries returned 2
----------------------------

6. Retrieve the keytab you will use for your Kerberized NFS exercises.

In this lab you can leverage the ipa tools to update your local copy of /etc/krb5.keytab with the required principals. After the configuration your hosts will have the principals for the host, but they will miss the ones for the NFS service, so you will need to update the local /etc/krb5.keytab by running the command below on every host. On backend1a:

[root@backend1a ~]# ipa-getkeytab -s ipa1a.example.com -p nfs/backend1a.example.com -k /etc/krb5.keytab
Keytab successfully retrieved and stored in: /etc/krb5.keytab

while on frontend1a

[root@frontend1a ~]# ipa-getkeytab -s ipa1a.example.com -p nfs/frontend1a.example.com -k /etc/krb5.keytab
Keytab successfully retrieved and stored in: /etc/krb5.keytab

(command will update the existing keytab)

You can check the content of your /etc/krb5.keytab file with the command klist -k /etc/krb5.keytab, which will produce the output below:

[root@frontend1a ~]# klist -k /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
 1 host/frontend1a.example.com@EXAMPLE.COM
 1 host/frontend1a.example.com@EXAMPLE.COM
 1 host/frontend1a.example.com@EXAMPLE.COM
 1 host/frontend1a.example.com@EXAMPLE.COM
 1 nfs/frontend1a.example.com@EXAMPLE.COM
 1 nfs/frontend1a.example.com@EXAMPLE.COM
 1 nfs/frontend1a.example.com@EXAMPLE.COM
 1 nfs/frontend1a.example.com@EXAMPLE.COM

7. Test your Kerberized NFS.

Now you can give NFS with Kerberos a try and verify that your lab works.

Configure the NFS service to export a krb5p-secured filesystem, e.g.

on backend1a.example.com:

sudo -i
mkdir /krbnfs
chown nfsnobody /krbnfs
echo "/krbnfs frontend1a.example.com(sec=krb5p,rw)" > /etc/exports.d/krbnfs.exports
systemctl restart nfs-server
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
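
Before moving to the client, it may be worth double-checking that the export and the firewall rule are in place (plain verification commands, not part of the original steps):

exportfs -v
firewall-cmd --list-services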

on frontend1a.example.com:

sudo -i
systemctl restart nfs-secure
mount -t nfs4 -o sec=krb5p backend1a.example.com:/krbnfs /mnt/
mount|grep krbnfs
backend1a.example.com:/krbnfs on /mnt type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=krb5p,clientaddr=172.31.1.30,local_lock=none,addr=172.31.1.20)

Congratulations! Enjoy your kerberized RH7 Lab on AWS 🙂

PS: on older versions of RH (7.0, not sure about 7.1) it was also required to enable and use nfs-secure-server on the NFS server, while on the latest builds (like the RH 7.2 we have on AWS) it is not required since GSSProxy takes care of the “secure” jobs (see https://fedorahosted.org/gss-proxy/wiki/NFS for details).

PPS: this lab also works well for testing the LDAP-based authentication that is part of the RHCSA objectives.

8. How to troubleshoot IPA/Kerberos.

If you have trouble and your Kerberized NFS exports don’t work, you may find it helpful to check /var/log/krb5kdc.log on ipa1a.example.com and see whether odd requests / hosts / services are coming through. It can help spot misconfigurations, typos and other issues. Good luck!

Migrate your self-managed wordpress blog to wordpress.com

I have been running this blog on a self-managed virtual host for a while (~2 years); then, two weeks ago, in order to save some money and lower the maintenance effort, I decided to migrate it to wordpress.com, where it is right now.

In theory wordpress provides various import / export tools (for many different sources), but I immediately found an issue with the way media resources are restored. Well, I actually found two issues…

  1. I can’t say whether it depends on a wrong configuration in my previous blog setup or whether this is the way it is supposed to work, but I have noticed that images are linked with absolute URLs inside the wordpress posts, and I have found no way to make them dynamic.
  2. Export “all contents” doesn’t export all contents, despite that being what the import/export tool advertises on wordpress.

For issue 1 I made some tests to identify the right absolute path to use, then I applied the good old “search / replace” method to the exported XML file before uploading it to the new blog. That was enough.

For issue 2 I had to make some more checks, and I found that you must explicitly select to export media files from the wordpress tool. Once you have exported and imported them into the new blog, you will see wordpress.com grabbing all your media attachments from the old blog.

It took a couple of hours to get to the bottom of both issues (and many attempts, each followed by a manual cleanup; unfortunately there is no way to reset a wordpress.com blog, or at least I haven’t found one that doesn’t cancel the existing subscriptions on the blog itself, and I had already bought the domain mapping to yari.net…).

tl;dr

If you are facing issues because the posts you imported from your old wordpress blog into the new one are missing their shiny images, you should consider proceeding as follows:

  1. Check the absolute path your old posts use to link and load images: look for HTML tags like <img title="wp-1456610569051" class="alignnone size-full" alt="image" src="https://yaridotnet.files.wordpress.com/2016/02/wp-1456610569051.jpeg?w=768" originalw="768" scale="2">. The first part of the URL, "https://yaridotnet.files.wordpress.com/", is what you’re going to use to fix your export XML file.
  2. Export all your content from your old blog from the Tools / Export area of the wordpress administration dashboard.
  3. Perform on the posts of your old blog the same check you did at point 1 and verify the absolute URL your old blog uses.
  4. Edit the exported XML file and replace the leading part of those URLs with the new one; everything up to the date has to be replaced (a sed one-liner like the one sketched after this list works well).
  5. Upload your data again and you will be fine.
  6. Export from the old blog only the media files (even if Tools / Export / All claims to export everything, media won’t be loaded by the new blog, so you need an explicit media export / import).
  7. Import that second file into the new blog
  8. Have a look at the access log of your old wordpress blog (if you can) and you will see that after some time wordpress.com starts downloading your old media files to their new, correct location.
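
For step 4, the search / replace can be done with any editor; as a rough sketch with sed (the old URL prefix below is just a placeholder for wherever your self-managed blog served its uploads, the new one is the wordpress.com media path mentioned at step 1):

sed -i 's|http://www.myoldblog.example/wp-content/uploads/|https://yaridotnet.files.wordpress.com/|g' export.xml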