Import RPM repository GPG keys from other keyservers temporarily

By Major Hayden

I’ve been working through some patches to OpenStack-Ansible lately to optimize how we configure yum repositories in our deployments. During that work, I ran into some issues where pgp.mit.edu was returning intermittent 5xx errors for some requests to retrieve GPG keys.

Ansible was returning this error:

curl: (22) The requested URL returned error: 502 Proxy Error
error: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x61E8806C: import read failed(2)

How does the rpm command know which keyserver to use? Let’s use the --showrc argument to show how it is configured:

$ rpm --showrc | grep hkp
-14: _hkp_keyserver http://pgp.mit.edu
-14: _hkp_keyserver_query   %{_hkp_keyserver}:11371/pks/lookup?op=get&search=0x

How do we change this value temporarily to test a GPG key retrieval from a different server? There’s an argument for that as well: --define:

$ rpm --help | grep define
  -D, --define='MACRO EXPR'        define MACRO with value EXPR

We can assemble that on the command line to set a different keyserver temporarily:

# rpm -vv --define="%_hkp_keyserver http://pool.sks-keyservers.net" --import 0x61E8806C
-- SNIP --
D: adding "63deac79abe7ad80e147d671c2ac5bd1c8b3576e" to Sha1header index.
-- SNIP --

Let’s verify that our new key is in place:

# rpm -qa | grep -i gpg-pubkey-61E8806C
gpg-pubkey-61e8806c-5581df56
# rpm -qi gpg-pubkey-61e8806c-5581df56
Name        : gpg-pubkey
Version     : 61e8806c
Release     : 5581df56
Architecture: (none)
Install Date: Wed 20 Sep 2017 10:17:11 AM CDT
Group       : Public Keys
Size        : 0
License     : pubkey
Signature   : (none)
Source RPM  : (none)
Build Date  : Wed 17 Jun 2015 03:57:58 PM CDT
Build Host  : localhost
Relocations : (not relocatable)
Packager    : CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) <security@centos.org>
Summary     : gpg(CentOS Virtualization SIG (http://wiki.centos.org/SpecialInterestGroup/Virtualization) <security@centos.org>)
Description :
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: rpm-4.11.3 (NSS-3)

mQENBFWB31YBCAC4dFmTzBDOcq4R1RbvQXLkyYfF+yXcsMA5kwZy7kjxnFqBoNPv
aAjFm3e5huTw2BMZW0viLGJrHZGnsXsE5iNmzom2UgCtrvcG2f65OFGlC1HZ3ajA
8ZIfdgNQkPpor61xqBCLzIsp55A7YuPNDvatk/+MqGdNv8Ug7iVmhQvI0p1bbaZR
0GuavmC5EZ/+mDlZ2kHIQOUoInHqLJaX7iw46iLRUnvJ1vATOzTnKidoFapjhzIt
i4ZSIRaalyJ4sT+oX4CoRzerNnUtIe2k9Hw6cEu4YKGCO7nnuXjMKz7Nz5GgP2Ou
zIA/fcOmQkSGcn7FoXybWJ8DqBExvkJuDljPABEBAAG0bENlbnRPUyBWaXJ0dWFs
aXphdGlvbiBTSUcgKGh0dHA6Ly93aWtpLmNlbnRvcy5vcmcvU3BlY2lhbEludGVy
ZXN0R3JvdXAvVmlydHVhbGl6YXRpb24pIDxzZWN1cml0eUBjZW50b3Mub3JnPokB
OQQTAQIAIwUCVYHfVgIbAwcLCQgHAwIBBhUIAgkKCwQWAgMBAh4BAheAAAoJEHrr
voJh6IBsRd0H/A62i5CqfftuySOCE95xMxZRw8+voWO84QS9zYvDEnzcEQpNnHyo
FNZTpKOghIDtETWxzpY2ThLixcZOTubT+6hUL1n+cuLDVMu4OVXBPoUkRy56defc
qkWR+UVwQitmlq1ngzwmqVZaB8Hf/mFZiB3B3Jr4dvVgWXRv58jcXFOPb8DdUoAc
S3u/FLvri92lCaXu08p8YSpFOfT5T55kFICeneqETNYS2E3iKLipHFOLh7EWGM5b
Wsr7o0r+KltI4Ehy/TjvNX16fa/t9p5pUs8rKyG8SZndxJCsk0MW55G9HFvQ0FmP
A6vX9WQmbP+ml7jsUxtEJ6MOGJ39jmaUvPc=
=ZzP+
-----END PGP PUBLIC KEY BLOCK-----

Success!

If you want to override the value permanently, create a ~/.rpmmacros file and add the following line to it:

%_hkp_keyserver http://pool.sks-keyservers.net
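
If you’d rather script that, here’s a quick sketch from the shell (it appends blindly, so check whether ~/.rpmmacros already defines the macro first):

$ echo '%_hkp_keyserver http://pool.sks-keyservers.net' >> ~/.rpmmacros
$ rpm --showrc | grep _hkp_keyserver

The second command confirms that rpm picked up the new default.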

Photo credit: Wikipedia


Thunderbird changes fonts in some messages, not all

By Major Hayden

Thunderbird is a great choice for a mail client on Linux systems if you prefer a GUI, but I had some problems with fonts in the most recent releases. The monospace font used for plain text messages was difficult to read.

I opened Edit > Preferences > Display and clicked Advanced to the right of Fonts & Colors. The default font for monospace text was “Monospace”, and that one isn’t terribly attractive. I chose “DejaVu Sans Mono” instead, and closed the dialog boxes.

The fonts in monospace messages didn’t change. I quit Thunderbird, opened it again, and still didn’t see a change. The strange part is that a small portion of my monospaced messages were opening with the updated font while the majority were not.

I went back into Thunderbird’s preferences and took another look:

thunderbird fonts and colors panel

Everything was set as I expected. I started with some Google searches and stumbled upon a Mozilla bug: Changing monospace font doesn’t affect all messages. One of the participants in the bug mentioned that any emails received without ISO-8859-1 encoding would be unaffected, since Thunderbird allows you to set fonts for each encoding.

I clicked the dropdown where “Latin” was selected and I selected “Other Writing Systems”. After changing the monospace font there, the changes went into effect for all of my monospaced messages!
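
Under the hood, Thunderbird stores these choices in per-script font preferences. My assumption (worth verifying in the Config Editor) is that the “Other Writing Systems” entry corresponds to the x-unicode suffix, so the dialog change above should roughly map to:

font.name.monospace.x-unicode: DejaVu Sans Mono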


Troubleshooting CyberPower PowerPanel issues in Linux

By Major Hayden

I have a CyberPower BRG1350AVRLCD at home and I’ve just connected it to a new device. However, the pwrstat command doesn’t retrieve any useful data on the new system:

# pwrstat -status

The UPS information shows as following:


    Current UPS status:
        State........................ Normal
        Power Supply by.............. Utility Power
        Last Power Event............. None

I disconnected the USB cable and ran pwrstat again. Same output. I disconnected power from the UPS itself and ran pwrstat again. Same output. This can’t be right.

Checking the basics

A quick look at dmesg output shows that the UPS is connected and the kernel recognizes it:

[   65.661489] usb 3-1: new full-speed USB device number 7 using xhci_hcd
[   65.830769] usb 3-1: New USB device found, idVendor=0764, idProduct=0501
[   65.830771] usb 3-1: New USB device strings: Mfr=3, Product=1, SerialNumber=2
[   65.830772] usb 3-1: Product: BRG1350AVRLCD
[   65.830773] usb 3-1: Manufacturer: CPS
[   65.830773] usb 3-1: SerialNumber: xxxxxxxxx
[   65.837801] hid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0

I checked the /var/log/pwrstatd.log file to see if there were any errors:

2017/07/25 12:01:17 PM  Daemon startups.
2017/07/25 12:01:24 PM  Communication is established.
2017/07/25 12:01:27 PM  Low Battery capacity is restored.
2017/07/25 12:05:19 PM  Daemon stops its service.
2017/07/25 12:05:19 PM  Daemon startups.
2017/07/25 12:05:19 PM  Communication is established.
2017/07/25 12:05:22 PM  Low Battery capacity is restored.
2017/07/25 12:06:27 PM  Daemon stops its service.

The pwrstatd daemon can see the device and communicate with it. This is unusual.

Digging into the daemon

If the daemon can truly see the UPS, then what is it talking to? I used lsof to examine what the pwrstatd daemon is doing:

# lsof -p 3975
COMMAND   PID USER   FD   TYPE             DEVICE SIZE/OFF      NODE NAME
pwrstatd 3975 root  cwd    DIR               8,68      224        96 /
pwrstatd 3975 root  rtd    DIR               8,68      224        96 /
pwrstatd 3975 root  txt    REG               8,68   224175 134439879 /usr/sbin/pwrstatd
pwrstatd 3975 root  mem    REG               8,68  2163104 134218946 /usr/lib64/libc-2.25.so
pwrstatd 3975 root  mem    REG               8,68  1226368 134218952 /usr/lib64/libm-2.25.so
pwrstatd 3975 root  mem    REG               8,68    19496 134218950 /usr/lib64/libdl-2.25.so
pwrstatd 3975 root  mem    REG               8,68   187552 134218939 /usr/lib64/ld-2.25.so
pwrstatd 3975 root    0r   CHR                1,3      0t0      1028 /dev/null
pwrstatd 3975 root    1u  unix 0xffff9e395e137400      0t0     37320 type=STREAM
pwrstatd 3975 root    2u  unix 0xffff9e395e137400      0t0     37320 type=STREAM
pwrstatd 3975 root    3u  unix 0xffff9e392f0c0c00      0t0     39485 /var/pwrstatd.ipc type=STREAM
pwrstatd 3975 root    4u   CHR             180,96      0t0     50282 /dev/ttyS1

Wait a minute. The last line of the lsof output shows that pwrstatd is talking to /dev/ttyS1, but the device is supposed to be a hiddev device over USB. If you remember, we had this line in dmesg when the UPS was plugged in:

hid-generic 0003:0764:0501.0004: hiddev0,hidraw0: USB HID v1.10 Device [CPS BRG1350AVRLCD] on usb-0000:00:14.0-1/input0

Things are beginning to make more sense now. I have a USB-to-serial device that allows my server to talk to the console port on my Cisco switch:

[   80.389533] usb 3-1: new full-speed USB device number 9 using xhci_hcd
[   80.558025] usb 3-1: New USB device found, idVendor=067b, idProduct=2303
[   80.558027] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[   80.558028] usb 3-1: Product: USB-Serial Controller D
[   80.558029] usb 3-1: Manufacturer: Prolific Technology Inc. 
[   80.558308] pl2303 3-1:1.0: pl2303 converter detected
[   80.559937] usb 3-1: pl2303 converter now attached to ttyUSB0

It appears that pwrstatd is trying to talk to my Cisco switch (through the USB-to-serial adapter) rather than my UPS! I’m sure they could have a great conversation together, but it’s hardly productive.

Fixing it

The /etc/pwrstatd.conf file has a relevant section:

# The pwrstatd accepts four types of device node which includes the 'ttyS',
# 'ttyUSB', 'hiddev', and 'libusb' for communication with UPS. The pwrstatd
# defaults to enumerate all acceptable device nodes and pick up to use an
# available device node automatically. But this may cause a disturbance to the
# device node which is occupied by other software. Therefore, you can restrict
# this enumerate behave by using allowed-device-nodes option. You can assign
# the single device node path or multiple device node paths divided by a
# semicolon at this option. All groups of 'ttyS', 'ttyUSB', 'hiddev', or
# 'libusb' device node are enumerated without a suffix number assignment.
# Note, the 'libusb' does not support suffix number only.
#
# For example: restrict to use ttyS1, ttyS2 and hiddev1 device nodes at /dev
# path only.
# allowed-device-nodes = /dev/ttyS1;/dev/ttyS2;/dev/hiddev1
#
# For example: restrict to use ttyS and ttyUSB two groups of device node at
# /dev,/dev/usb, and /dev/usb/hid paths(includes ttyS0 to ttySN and ttyUSB0 to
# ttyUSBN, N is number).
# allowed-device-nodes = ttyS;ttyUSB
#
# For example: restrict to use hiddev group of device node at /dev,/dev/usb,
# and /dev/usb/hid paths(includes hiddev0 to hiddevN, N is number).
# allowed-device-nodes = hiddev
#
# For example: restrict to use libusb device.
# allowed-device-nodes = libusb
allowed-device-nodes =

We need to explicitly tell pwrstatd to talk to the UPS on /dev/usb/hiddev0:

allowed-device-nodes = /dev/usb/hiddev0
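
Before restarting the daemon, you can double-check which hiddev node the UPS registered as (a quick sketch; the exact path varies between distributions, and 0764 is the vendor ID from the dmesg output above):

# ls -l /dev/usb/hiddev* /dev/hiddev* 2>/dev/null
# udevadm info --name=/dev/usb/hiddev0 | grep -i 0764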

Let’s restart the pwrstatd daemon and see what we get:

# systemctl restart pwrstatd
# pwrstat -status

The UPS information shows as following:

    Properties:
        Model Name................... BRG1350AVRLCD
        Firmware Number..............
        Rating Voltage............... 120 V
        Rating Power................. 810 Watt(1350 VA)

    Current UPS status:
        State........................ Normal
        Power Supply by.............. Utility Power
        Utility Voltage.............. 121 V
        Output Voltage............... 121 V
        Battery Capacity............. 100 %
        Remaining Runtime............ 133 min.
        Load......................... 72 Watt(9 %)
        Line Interaction............. None
        Test Result.................. Unknown
        Last Power Event............. None

Success!

Photo credit: Wikipedia


Apply the STIG to even more operating systems with ansible-hardening

By Major Hayden

Tons of improvements made their way into the ansible-hardening role in preparation for the OpenStack Pike release next month. The role has a new name, new documentation and extra tests.

The role uses the Security Technical Implementation Guide (STIG) produced by the Defense Information Systems Agency (DISA) and applies the guidelines to Linux hosts using Ansible. Every control is configurable via simple Ansible variables and each control is thoroughly documented.

These controls are now applied to an even wider variety of Linux distributions:

  • CentOS 7
  • Debian 8 Jessie (new for Pike)
  • Fedora 25 (new for Pike)
  • openSUSE Leap 42.2+ (new for Pike)
  • Red Hat Enterprise Linux 7
  • SUSE Linux Enterprise 12 (new for Pike)
  • Ubuntu 14.04 Trusty
  • Ubuntu 16.04 Xenial

Any patches to the ansible-hardening role are tested against all of these operating systems (except RHEL 7 and SUSE Linux Enterprise). Support for openSUSE testing landed this week.

Work is underway to put the finishing touches on the master branch before the Pike release and we need your help!

If you have any of these operating systems deployed, please test the role on your systems! This is pre-release software, so it’s best to apply it only to a new server. Read the “Getting Started” documentation to learn how to deploy the role with ansible-galaxy or git.

Photo credit: Wikipedia


Customize LDAP autocompletion format in Thunderbird

By Major Hayden

Thunderbird can connect to an LDAP server and autocomplete email addresses as you type, but it needs some adjustment for certain LDAP servers. One of the servers that I use regularly returns email addresses like this in the Thunderbird interface:

username <firstname.lastname@domain.tld>

The email address looks fine, but I’d much rather have the person’s full name instead of the username. Here’s what I’m looking for:

Firstname Lastname <firstname.lastname@domain.tld>

In older Thunderbird versions, setting ldap_2.servers.SERVER_NAME.autoComplete.nameFormat to displayName was enough. However, this option isn’t used in recent versions of Thunderbird.

Digging in

After a fair amount of searching the Thunderbird source code with awk, I found a mention of DisplayName in nsAbLDAPAutoCompleteSearch.js that looked promising:

// Create a minimal map just for the display name and primary email.
this._attributes =
  Components.classes["@mozilla.org/addressbook/ldap-attribute-map;1"]
            .createInstance(Components.interfaces.nsIAbLDAPAttributeMap);
this._attributes.setAttributeList("DisplayName",
  this._book.attributeMap.getAttributeList("DisplayName", {}), true);
this._attributes.setAttributeList("PrimaryEmail",
  this._book.attributeMap.getAttributeList("PrimaryEmail", {}), true);

Something is unusual here. The LDAP field is called displayName, but this attribute is called DisplayName (note the capitalization of the D). Just before that line, I see a lookup in an attributes map of some sort. There may be a configuration option that is called DisplayName.

In Thunderbird, I selected Edit > Preferences. I clicked the Advanced tab and then Config Editor. A quick search for DisplayName revealed an interesting configuration option:

ldap_2.servers.default.attrmap.DisplayName: cn,commonname

Fixing it

That’s the problem! This needs to map to displayName in my case, and not cn,commonname (which returns a user’s username). There are two different ways to fix this:

# Change it for just one LDAP server
ldap_2.servers.SERVER_NAME.attrmap.DisplayName: displayName
# Change it for all LDAP servers by default (careful)
ldap_2.servers.default.attrmap.DisplayName: displayName

After making the change, quit Thunderbird and relaunch it. Compose a new email and start typing in the email address field. The user’s first and last name should appear!


Old role, new name: ansible-hardening

By Major Hayden

The interest in the openstack-ansible-security role has taken off faster than I expected, and one piece of constant feedback I received was around the name of the role. Some users were unsure if they needed to use the role in an OpenStack cloud or if the OpenStack-Ansible project was required.

The role works everywhere — OpenStack cloud or not. I started a mailing list thread on the topic and we eventually settled on a new name: ansible-hardening! The updated documentation is already available.

The old openstack-ansible-security role is being retired and it will not receive any additional updates. Moving to the new role is easy:

  1. Install ansible-hardening with ansible-galaxy (or git clone)
  2. Change your playbooks to use the ansible-hardening role

There’s no need to change any variable names or tags — they are all kept the same in the new role.
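
The mechanical part of the migration is small. A sketch (the git URL reflects where the project lived at the time and may have moved since):

$ ansible-galaxy install git+https://git.openstack.org/openstack/ansible-hardening

Then swap openstack-ansible-security for ansible-hardening in the roles section of your playbooks.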

As always, if you have questions or comments about the role, drop by #openstack-ansible on Freenode IRC or open a bug in Launchpad.


Enable AppArmor on a Debian Jessie cloud image

By Major Hayden

I merged some initial Debian support into the openstack-ansible-security role and ran into an issue enabling AppArmor. The apparmor service failed to start and I found this output in the system journal:

kernel: AppArmor: AppArmor disabled by boot time parameter

Digging in

That was unexpected. I was using the Debian jessie cloud image, which uses extlinux as its bootloader, and the extlinux configuration file didn’t reference AppArmor at all:

# cat /boot/extlinux/extlinux.conf 
default linux
timeout 1
label linux
kernel boot/vmlinuz-3.16.0-4-amd64
append initrd=boot/initrd.img-3.16.0-4-amd64 root=/dev/vda1 console=tty0 console=ttyS0,115200 ro quiet

I learned that AppArmor is disabled by default in Debian unless you explicitly enable it. That’s the opposite of Red Hat-based distributions, where SELinux is enabled unless you turn it off. To make matters worse, Debian’s cloud image doesn’t have any facilities or scripts to automatically update the extlinux configuration file when new kernels are installed.

Making a repeatable fix

My two goals here were to:

  1. Ensure AppArmor is enabled on the next boot
  2. Ensure that AppArmor remains enabled when new kernels are installed

The first step is to install grub2:

apt-get -y install grub2

During the installation, a package configuration window will appear that asks about where grub should be installed. I selected /dev/vda from the list and waited for apt to finish the package installation.

The next step is to edit /etc/default/grub and add in the AppArmor configuration. Adjust the GRUB_CMDLINE_LINUX_DEFAULT line to look like the one below:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet apparmor=1 security=apparmor"
GRUB_CMDLINE_LINUX=""

Ensure that the required AppArmor packages are installed:

apt-get -y install apparmor apparmor-profiles apparmor-utils

Enable the AppArmor service upon reboot:

systemctl enable apparmor

Run update-grub and reboot. After the reboot, run apparmor_status and you should see lots of AppArmor profiles loaded:

# apparmor_status 
apparmor module is loaded.
38 profiles are loaded.
3 profiles are in enforce mode.
   /usr/lib/chromium-browser/chromium-browser//browser_java
   /usr/lib/chromium-browser/chromium-browser//browser_openjdk
   /usr/lib/chromium-browser/chromium-browser//sanitized_helper
35 profiles are in complain mode.
   /sbin/klogd
   /sbin/syslog-ng
   /sbin/syslogd
   /usr/lib/chromium-browser/chromium-browser
   /usr/lib/chromium-browser/chromium-browser//chromium_browser_sandbox
   /usr/lib/chromium-browser/chromium-browser//lsb_release
   /usr/lib/chromium-browser/chromium-browser//xdgsettings
   /usr/lib/dovecot/anvil
   /usr/lib/dovecot/auth
   /usr/lib/dovecot/config
   /usr/lib/dovecot/deliver
   /usr/lib/dovecot/dict
   /usr/lib/dovecot/dovecot-auth
   /usr/lib/dovecot/dovecot-lda
   /usr/lib/dovecot/imap
   /usr/lib/dovecot/imap-login
   /usr/lib/dovecot/lmtp
   /usr/lib/dovecot/log
   /usr/lib/dovecot/managesieve
   /usr/lib/dovecot/managesieve-login
   /usr/lib/dovecot/pop3
   /usr/lib/dovecot/pop3-login
   /usr/lib/dovecot/ssl-params
   /usr/sbin/avahi-daemon
   /usr/sbin/dnsmasq
   /usr/sbin/dovecot
   /usr/sbin/identd
   /usr/sbin/mdnsd
   /usr/sbin/nmbd
   /usr/sbin/nscd
   /usr/sbin/smbd
   /usr/sbin/smbldap-useradd
   /usr/sbin/smbldap-useradd///etc/init.d/nscd
   /usr/{sbin/traceroute,bin/traceroute.db}
   /{usr/,}bin/ping
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
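
If no profiles show up after the reboot, it’s worth confirming that the new arguments actually reached the kernel. The output below is illustrative (paths taken from the extlinux configuration earlier); the part that matters is the apparmor=1 security=apparmor at the end:

# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.16.0-4-amd64 root=/dev/vda1 ro quiet apparmor=1 security=apparmor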

Final thoughts

I’m still unsure about why AppArmor is disabled by default. There aren’t that many profiles shipped by default (38 on my freshly installed jessie system versus 417 SELinux policies in Fedora 25) and many of them affect services that wouldn’t cause significant disruptions on the system.

There is a discussion that ended last year around how to automate the AppArmor enablement process when the AppArmor packages are installed. This would be a great first step to make the process easier, but it would probably make more sense to take the step of enabling it by default.

Photo credit: Max Pixel


Fixing OpenStack noVNC consoles that ignore keyboard input

By Major Hayden

I opened up a noVNC console to a virtual machine today in my OpenStack cloud but found that the console wouldn’t take keyboard input. The Send Ctrl-Alt-Del button in the top right of the window worked just fine, but I couldn’t type anywhere in the console. This happened on an Ocata OpenStack cloud deployed with OpenStack-Ansible on CentOS 7.

Test the network path

The network path to the console is a little deep for this deployment, but here’s a quick explanation:

  • My laptop connects to HAProxy
  • HAProxy sends the traffic to the nova-novncproxy service
  • nova-novncproxy connects to the correct VNC port on the right hypervisor

If all of that works, I get a working console! I knew the network path was set up correctly because I could see the console in my browser.

My next troubleshooting step was to dump network traffic with tcpdump on the hypervisor itself. I dumped the traffic on port 5900 (which was the VNC port for this particular instance) and watched the output. Whenever I wiggled the mouse over the noVNC console in my browser, I saw a flurry of network traffic. The same thing happened if I punched lots of keys on the keyboard. At this point, it was clear that the keyboard input was making it to the hypervisor, but it wasn’t being handled correctly.
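
The capture itself is a one-liner on the hypervisor (5900 here; substitute whatever VNC port your instance uses):

# tcpdump -nn -i any port 5900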

Test the console

Next, I opened up virt-manager, connected to the hypervisor, and opened a connection to the instance. The keyboard input worked fine there. I opened up remmina and connected via plain old VNC. The keyboard input worked fine there, too!

Investigate in the virtual machine

The system journal in the virtual machine had some interesting output:

kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.
kernel: atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0).
kernel: atkbd serio0: Use 'setkeycodes 00 <keycode>' to make it known.

It seems like my keyboard input was being lost in translation — literally. I have a US layout keyboard (Thinkpad X1 Carbon) and the virtual machine was configured with the en-us keymap:

# virsh dumpxml 4 | grep vnc
    <graphics type='vnc' port='5900' autoport='yes' listen='192.168.250.41' keymap='en-us'>

A thorough Googling session revealed that it is not recommended to set a keymap for virtual machines in libvirt in most situations. I set the nova_console_keymap variable in /etc/openstack_deploy/user_variables.yml to an empty string:

nova_console_keymap: ''

I redeployed the nova service using the OpenStack-Ansible playbooks:

openstack-ansible os-nova-install.yml

Once that was done, I powered off the virtual machine and powered it back on. (This is needed to ensure that the libvirt changes go into effect for the virtual machine.)
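
Powering the instance off and back on recreates the libvirt domain with the new settings. With the OpenStack client, that looks something like this (the instance name is an example):

$ openstack server stop my-instance
$ openstack server start my-instance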

Great success! The keyboard was working in the noVNC console once again!

Photo credit: Wikipedia


OpenStack Summit Boston 2017 Recap

By Major Hayden

The OpenStack Summit wrapped up today in Boston and it was a great week! There were plenty of informative breakouts and some interesting keynotes.

Keynotes

Beth Cohen shared some of the work that Verizon has done with software-defined WAN on customer-premises equipment (CPE). She showed a demo of how customers could easily provision virtual network hardware, such as firewalls or intrusion detection systems, without waiting for hardware or cabling changes. I’m less familiar with the world of telcos, so I found this really interesting.

Daniela Rus gave an amazing keynote about the democratization of robotics. She showed videos of tiny robots doing some amazing things, including robots which could be swallowed. Those robots could help children who swallow dangerous things (like batteries) avoid painful surgery.

The big surprise on the second day was the Q&A with Edward Snowden. At first, I was skeptical about it being a publicity stunt, but it turned out to be a really good conversation about the value of open source.

My favorite keynote was from Patrick Weeks of GE. He talked about GE’s IT transformation goals and how they selected OpenStack to meet them. They chose a solution from Rackspace and their engineers love it!

Breakouts

Here are some links to my favorite breakouts:

OpenStack-Ansible

Although I couldn’t make it to all of the OpenStack-Ansible sessions, we had a great turnout for the ones I attended! Every seat was taken during the developer onboarding session and we had some helpful comments from new contributors.

Andy McCrae leads the OpenStack-Ansible onboarding session

My talks

The week was a long one for me! I shared two full-length talks, helped with a lightning talk, and joined a panel. Here are some quick links to the videos and slides:

  • Grow Your Community: Inspire an Impostor
  • Securing OpenStack Cloud and Beyond with Ansible
  • The Open Open Open Open Cloud
  • OpenStack Security Team Update

Photo credit: Luciot


OpenStack-Ansible networking on CentOS 7 with systemd-networkd

By Major Hayden

Although OpenStack-Ansible doesn’t fully support CentOS 7 yet, the support is almost ready. I have a four node Ocata cloud deployed on CentOS 7, but I decided to change things around a bit and use systemd-networkd instead of NetworkManager or the old rc scripts.

This post will explain how to configure the network for an OpenStack-Ansible cloud on CentOS 7 with systemd-networkd.

Each one of my OpenStack hosts has four network interfaces and each one has a specific task:

  • enp2s0 – regular network interface, carries inter-host LAN traffic
  • enp3s0 – carries br-mgmt bridge for LXC container communication
  • enp4s0 – carries br-vlan bridge for VM public network connectivity
  • enp5s0 – carries br-vxlan bridge for VM private network connectivity

Adjusting services

First off, we need to get systemd-networkd and systemd-resolved ready to take over networking:

systemctl disable network
systemctl disable NetworkManager
systemctl enable systemd-networkd
systemctl enable systemd-resolved
systemctl start systemd-resolved
rm -f /etc/resolv.conf
ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

LAN interface

My enp2s0 network interface carries traffic between hosts and handles regular internal LAN traffic. Each snippet below lives in /etc/systemd/network/; .netdev files create virtual devices and .network files configure them. This first one could be saved as enp2s0.network:

[Match]
Name=enp2s0

[Network]
Address=192.168.250.21/24
Gateway=192.168.250.1
DNS=192.168.250.1
DNS=8.8.8.8
DNS=8.8.4.4
IPForward=yes

This one is quite simple, but the rest get a little more complicated.

Management bridge

The management bridge (br-mgmt) carries traffic between LXC containers. We start by creating the bridge device itself:

[NetDev]
Name=br-mgmt
Kind=bridge

Now we configure the network on the bridge (I use OpenStack-Ansible’s defaults here):

[Match]
Name=br-mgmt

[Network]
Address=172.29.236.21/22

I run the management network on VLAN 10, so I need a network device and network configuration for the VLAN as well. This step creates the vlan10 interface and attaches it to the br-mgmt bridge:

[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10

[Match]
Name=vlan10

[Network]
Bridge=br-mgmt

Finally, we attach the vlan10 interface to enp3s0 to tie it all together:

[Match]
Name=enp3s0

[Network]
VLAN=vlan10

Public instance connectivity

My router offers up a few different VLANs for OpenStack instances to use for their public networks. We start by creating a br-vlan network device and its configuration:

[NetDev]
Name=br-vlan
Kind=bridge

[Match]
Name=br-vlan

[Network]
DHCP=no

We can add this bridge onto the enp4s0 physical interface:

[Match]
Name=enp4s0

[Network]
Bridge=br-vlan

VXLAN private instance connectivity

This step is similar to the previous one. We start by defining our br-vxlan bridge:

[NetDev]
Name=br-vxlan
Kind=bridge

[Match]
Name=br-vxlan

[Network]
Address=172.29.240.21/22

My VXLAN traffic runs over VLAN 11, so we need to define that VLAN interface:

[NetDev]
Name=vlan11
Kind=vlan

[VLAN]
Id=11

[Match]
Name=vlan11

[Network]
Bridge=br-vxlan

We can hook this VLAN interface into the enp5s0 interface now:

[Match]
Name=enp5s0

[Network]
VLAN=vlan11

Checking our work

The cleanest way to apply all of these configurations is to reboot. The Adjusting services step from the beginning of this post will ensure that systemd-networkd and systemd-resolved come up after a reboot.

Run networkctl to get a current status of your network interfaces:

# networkctl
IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 enp2s0           ether              routable    configured
  3 enp3s0           ether              degraded    configured
  4 enp4s0           ether              degraded    configured
  5 enp5s0           ether              degraded    configured
  6 lxcbr0           ether              routable    unmanaged 
  7 br-vxlan         ether              routable    configured
  8 br-vlan          ether              degraded    configured
  9 br-mgmt          ether              routable    configured
 10 vlan11           ether              degraded    configured
 11 vlan10           ether              degraded    configured

You should have configured in the SETUP column for all of the interfaces you created. Some interfaces will show as degraded because they are missing an IP address (which is intentional for most of these interfaces).
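
To check the bridge wiring specifically, either of these works (the bridge tool comes with iproute2):

# bridge link show
# brctl show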


RHEL 7 STIG v1 updates for openstack-ansible-security

By Major Hayden

DISA’s final release of the Red Hat Enterprise Linux (RHEL) 7 Security Technical Implementation Guide (STIG) came out a few weeks ago and it has plenty of improvements and changes. The openstack-ansible-security role has already been updated with these changes.

Quite a few duplicated STIG controls were removed and a few new ones were added. Some of the controls in the pre-release were difficult to implement, especially those that changed parameters for PKI-based authentication.

The biggest challenge overall was the renumbering. The pre-release STIG used an unusual numbering convention: RHEL-07-123456. The final version used the more standardized “V” numbers, such as V-72225. This change required a substantial patch to bring the Ansible role in line with the new STIG release.

All of the role’s documentation is now updated to reflect the new numbering scheme and STIG changes. The key thing to remember is that you’ll need to use --skip-tags with the new STIG numbers if you need to skip certain tasks.
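
For example, skipping a single control now looks something like this (a sketch; playbook.yml is a placeholder and V-72225 is one of the new “V” numbers mentioned above):

$ ansible-playbook playbook.yml --skip-tags V-72225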

Note: These changes won’t be backported to the stable/ocata branch, so you need to use the master branch to get these changes.

Have feedback? Found a bug? Let us know!


Takeaways from Bruce Schneier’s talk: “Security and Privacy in a Hyper-connected World”

By Major Hayden

Bruce Schneier is one of my favorite speakers when it comes to the topic of all things security. His talk from IBM Interconnect 2017, “Security and Privacy in a Hyper-connected World“, covered a wide range of security concerns.

There were plenty of great quotes from the talk (scroll to the end for those) and I will summarize the main takeaways in this post.

People, process, and technology

Bruce hits this topic a lot and for good reason: a weak link in any of the three could lead to a breach and a loss of data. He talked about the concept of security as a product and a process. Security is part of every product we consume. Whether it’s the safety of the food that makes it into our homes or the new internet-connected thermostat on the wall, security is part of the product.

The companies that sell these products have a wide variety of strategies for managing security issues. Vulnerabilities in an internet-connected teapot are not worth much since there isn’t a lot of value there. It’s probably safe to assume that a teapot will have many more vulnerabilities than your average Apple or Android mobile device. Vulnerabilities in those devices are extremely valuable because the data we carry on those devices is valuable.

Certainty vs. uncertainty

The talk moved into incident response and how to be successful when the worst happens. Automation only works when there’s a high degree of certainty in the situation. If there are variables that can be plugged into an algorithm and a result comes out the other end, automation is fantastic.

Bruce recommended using orchestration when tackling uncertain situations, such as security incident responses. Orchestration involves people following processes and using technology where it makes sense.

He talked about going through TSA checkpoints where metal detectors and x-ray scanners essentially run the show. Humans are around when these pieces of technology detect a problem. If you put a weapon into your carry-on, the x-ray scanner will notify a human, and that human can respond appropriately to escalate the problem. If a regular passenger has a firearm in a carry-on bag, the police should be alerted. If an Air Marshal has one, then the situation is handled entirely differently — by a human.

One other aspect he noted was around the uncertainty surrounding our data. Our control over our data, and our control over the systems that hold our data, is decreasing. Bruce remarked that he has more control over what his laptop does than his thermostat.

OODA loop

Bruce raised awareness around the OODA loop and its value when dealing with security incidents. Savvy readers will remember that the OODA loop was the crux of my “Be an inspiration, not an impostor” talk about impostor syndrome.

His point was that the OODA loop is a great way to structure a response during a stressful situation. When the orchestration works well, the defenders can complete an OODA loop faster than their adversaries can. When it works really well, the defenders can find ways to disrupt the adversaries’ OODA loops and thwart the attack.

Quotes

I tried to capture as many of the memorable quotes on Twitter as they happened. It’s certainly possible — perhaps likely — that I’ve missed a few words in the quotes. I apologize in advance to Bruce if I’ve mangled any of his words here.

"Internet security will become everything security." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"Security is a product and a process. What's changing are the ratios." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"Automation requires certainty. If you need flexibility, you need people." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"In a world of certainty, the focus is on data. In a world of uncertainty, the focus is on understanding." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"Certainty: centralization. Uncertainty: decentralization." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"Incident response is fundamentally uncertain. That's why it's difficult to automate." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"If you can't remove the people, make them effective." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"As soon as you get to a situation that needs judgement, people go to the foreground." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"The union of people, process and technology is orchestration. Tech where it works, people where it's necessary." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"If your OODA loop can move faster than the attacker, you have the advantage." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"There's a cognitive bias against spending on security." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"We are good at network security. We are bad at end device security." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017

"We're losing control of our data and the systems that process our data." #ibminterconnect

— Major Hayden (@majorhayden) March 21, 2017


Five reasons why I’m excited about POWER9

By Major Hayden

There’s plenty to like about the POWER8 architecture: high speed interconnections, large (and flexible) core counts, and support for lots of memory. POWER9 provides improvements in all of these areas and it has learned some entirely new tricks as well.

Here are my top five reasons for getting excited about POWER9:

NVLink 2.0

In the simplest terms, NVLink provides a very high speed interface between CPUs and GPUs with very low latency. This is quite handy for software that needs to exchange large amounts of data with GPUs. Machine learning can get a significant performance boost with NVLink.

NVLink 2.0 connects CPUs and GPUs with a 25GB/sec link (per lane). That’s not all — GPUs can communicate with each other over their own independent lanes. Drop in a few of NVIDIA’s Tesla P100 GPUs and you will have an extremely powerful accelerated system. NVIDIA’s next generation GPUs, codenamed “Volta”, will take this to the next level.

CAPI 2.0

The Coherent Accelerator Processor Interface (CAPI) allows the CPU to quickly access accelerators (think ASICs and FPGAs) over a high bandwidth interface with very low latency. CAPI 2.0 gets a 4x performance bump in POWER9 since it uses PCI-Express Gen 4.

The OpenCAPI 3.0 interface is also available, but it doesn’t use PCI-Express like CAPI does. It has an open interface with 25GB/sec of bandwidth and it uses direct memory access to perform operations very quickly.

On-chip acceleration

POWER9 provides more acceleration for common tasks right on the chip itself. This includes the common functions, like cryptography, but it also accelerates compression. The chip will accelerate gzip compression, 842 compression and AES/SHA. It also has a true random number generator built in.

Another nice on-chip benefit is the virtualization acceleration. No hypervisor calls are needed (this depends on your hypervisor choice) and this allows for user mode invocation of virtualization actions.

Multiple core options

POWER9 comes in two flavors: SMT8 and SMT4. SMT8 is geared towards the PowerVM platform and provides the strongest individual threads. This makes it great for larger PowerVM partitions that need lots of cores. SMT4 is designed more for Linux workloads.

The chip can handle 64 instructions per cycle on the SMT4 and 128 instructions on the SMT8. There are also some compiler benefits that can improve performance for modern codebases.

OpenPOWER Zaius

I’d be remiss if I didn’t mention Rackspace’s contributions to the Zaius P9 server! Zaius is a spec for an Open Compute POWER9 server. Google, Rackspace, IBM and Ingrasys have been working together to build this server for the masses.


IBM Interconnect 2017 first day keynote recap

By Major Hayden

The “Welcome to Interconnect” keynote kicked off the first day of IBM Interconnect 2017. The Mandalay Bay Event Center was quite full (which was evident as we began to file out!) and the speakers were engaging.

Some of the most memorable talks came from Indiegogo, Delos, and of course, Will Smith.

Indiegogo

Indiegogo is probably a familiar name for most people who know about crowdfunding. Over $1.1B has been raised by over 8 million backers since Indiegogo got started. They showcased three crowdfunded projects during the keynote and allowed audience members to vote for each one! The talk moved quickly, but it sounds like IBM committed to contribute one dollar per vote given to a project!

One of the most interesting was Waterbot. It detects subtle changes in water quality and can alert people to the nature of the change.

Another project was SmartPlate. It detects the quantity and type of food on the plate. It can estimate the nutritional content of the food (calories, fat, etc.) and could potentially warn someone if they’re about to eat something they’re allergic to.

Delos

Delos is in the business of making buildings work better for the people who work in them. They dig deep into everything from lighting to temperature to window placement and understand the best combination of environmental factors that enable people to do great work. It’s challenging work because changing buildings is costly and different types of industries may do better in different environments.

Will Smith

The “one more thing” moment of the morning was an interview with Will Smith. It may seem odd in the context of a technical conference, but he provided lots of good advice for people who are trying to transform themselves and their companies. He shared a lot about the early days of his career and how he transformed himself from a “clean rapper” to a TV and movie star.

There were several quotable (and often hilarious) moments:

Will Smith's advice for startups: remember to pay the IRS. :) #ibminterconnect

— Major Hayden (@majorhayden) March 20, 2017

"If somebody asks you to do something, say yeah, and then go figure out how to do it." — Will Smith #ibminterconnect

— Major Hayden (@majorhayden) March 20, 2017

"I'm like an African American Watson!" — Will Smith #ibminterconnect

— Major Hayden (@majorhayden) March 20, 2017

Two things stuck with me the most: what it was like to portray a hero in film (like Muhammad Ali) and how important it is to always have a simple mission statement:

"It's powerful when you get to compare yourself to these heroes." — Will Smith #ibminterconnect

— Major Hayden (@majorhayden) March 20, 2017

"I've always had a very simple mission statement: improve lives." — Will Smith #ibminterconnect

— Major Hayden (@majorhayden) March 20, 2017

He talked about how he approached new challenges and forks in the road. Each time he approached one, he asked himself: “Will this improve people’s lives?” If the answer was yes, he signed on and put his maximum effort behind it. It’s a simple question, but he talked about how it helped him get through some highly complex decisions.


Reflecting on 10 years of (mostly) technical blogging

By Major Hayden

It all started shortly after I joined Rackspace in December of 2006. I needed a place to dump the huge amounts of information I was learning as an entry-level Linux support technician and I wanted to store everything in a place where it could be easily shared. The blog was born!

The blog now has over 700 posts on topics ranging from Linux system administration to job interview preparation. I’ll get an email or a tweet once every few weeks from someone saying: “I ran into a problem, Googled for it, and found your blog!” Comments like that keep me going and allow me to push through the deepest writer’s block moments.

The post titled “Why technical people should blog (but don’t)” is one of my favorites and I get a lot of feedback about it. Many people still feel like there’s no audience out there for the things they write. Just remember that someone, somewhere, can learn something from you and from your experiences. Write from the heart about what interests you and the readers will gradually appear. It’s a Field of Dreams moment.

Thanks to everyone who has given me support over the years to keep the writing going!


OpenStack isn’t dead. It’s boring. That’s a good thing.

By Major Hayden


NOTE: The opinions shared in this post are mine alone and are not related to my employer in any way.


The first OpenStack Project Teams Gathering (PTG) event was held this week in Atlanta. The week was broken into two parts: cross-project work on Monday and Tuesday, and individual projects Wednesday through Friday. I was there for the first two days and heard a few discussions that started the same way.

Everyone keeps saying OpenStack is dead.
Is it?

OpenStack isn’t dead. It’s boring.

“The report of my death was an exaggeration”

Mark Twain said it best, but it works for OpenStack as well. The news has plenty of negative reports that cast a shadow over OpenStack’s future. You don’t have to look far to find them.

This isn’t evidence of OpenStack’s demise, but rather a transformation. Gartner called OpenStack a “science project” in 2015 and now 451 Research Group is saying something very different:

451 Research Group estimates OpenStack’s ecosystem to grow nearly five-fold in revenue, from US$1.27 billion market size in 2015 to US$5.75 billion by 2020.

A 35% CAGR sounds pretty good for a product in the middle of a transformation. In Texas, we’d say that’s more than enough to “shake a stick at”.

The transformation

You can learn a lot about the transformation going on within OpenStack by reading analyst reports and other news online. I won’t go into that here since that data is readily available.

Instead, I want to take a look at how OpenStack has changed from the perspective of a developer. My involvement with OpenStack started in the Diablo release in 2011 and my first OpenStack Summit was the Folsom summit in San Francisco.

Much of the discussion at that time was around the “minutiae” of developing software in its early forms. We discussed topics like how to test, how to handle a myriad of requirements that constantly change, and which frameworks to use in which projects. The list of projects was quite short at that time (there were only 7 main services in Grizzly). Lots of effort certainly poured into feature development, but there was a ton of work being done to keep the wheels from falling off entirely.

The discussions at this week’s PTG were very different.

Most of the discussion was around adding new integrations, improving reliability, and increasing scale. Questions were asked about how to integrate OpenStack into existing enterprise processes and applications. Reliability discussions were centered less around making the OpenStack services reliable, but more around how to increase overall resiliency when other hardware or software is misbehaving.

Discussions or arguments about minutiae were difficult to find.

Boring is good

I’m not trying to say that working with OpenStack is boring. Developing software within the OpenStack community is an enjoyable experience. The rules and regulations within most projects are there to prevent design mistakes that have appeared before and many of these sets of rules are aligned between projects. Testing code and providing meaningful reviews is also straightforward.

However, the drama, both unproductive and productive, that plagued the project in the past is diminishing. It still exists in places, especially when it comes to vendor relationships. (That’s where most open source projects see their largest amounts of friction, anyway.)

This transformation may make OpenStack appear “dead” to some. The OpenStack community is solving different problems now. Many of them are larger and more difficult to solve. Sometimes these challenges take more than one release to overcome. Either way, many OpenStack developers are up for these new challenges, even if they don’t make the headlines.

As for me: bring on the boring. Let’s crush the hard stuff.


Photo credit: By Mike (Flickr: DSC_6831_2_3_tonemapped) [CC BY 2.0], via Wikimedia Commons


What I’m looking forward to at IBM Interconnect 2017

By Major Hayden

IBM Interconnect 2017 is coming up next month in Las Vegas. Last year’s conference was a whirlwind of useful talks, inspiring hallway conversations, and great networking opportunities. I was exhausted by the week’s end, but it was totally worth it.

One of my favorite sessions from last year was Tanmay Bakshi’s keynote. It was truly inspiring to see someone so young take command of such a large stage and teach us all something. I can’t wait to hear another of his talks.

Open Technology Summit (OTS)

This year, I’m interested to see the talks at the Open Technology Summit. It’s essentially a group of lightning talks from industry experts on a variety of topics. This year’s agenda has talks on OpenStack, Cloud Foundry, Blockchain, and others. Although the event is three hours long, it feels a lot shorter than that since each talk is anywhere from 10-20 minutes long.

The OTS starts on Sunday afternoon, so be sure to arrive in Las Vegas early enough to attend.

Security

Information security is always an interesting topic and I have my eye on the following sessions:

IT Transformation

Open Source

I hope to see you there!

Photo credit: By Ronnie Macdonald from Chelmsford, United Kingdom (Mandalay Bay) [CC BY 2.0], via Wikimedia Commons


systemd-networkd on Ubuntu 16.04 LTS (Xenial)

By Major Hayden

My OpenStack cloud depends on Ubuntu, and the latest release of OpenStack-Ansible (what I use to deploy OpenStack) requires Ubuntu 16.04 at a minimum. I tried upgrading the servers in place from Ubuntu 14.04 to 16.04, but that didn’t work so well. Those servers wouldn’t boot and the only recourse was a re-install.

Once I finished re-installing them (and wrestling with several installer bugs in Ubuntu 16.04), it was time to set up networking. The traditional network configurations in /etc/network/interfaces are fine, but they weren’t working the same way they were in 14.04. The VLAN configuration syntax appears to be different now.

But wait — 16.04 has systemd 229! I can use systemd-networkd to configure the network in a way that is a lot more familiar to me. I’ve written about systemd-networkd before and the simplicity of its configurations.

I started with some simple configurations:

root@hydrogen:~# cd /etc/systemd/network
root@hydrogen:/etc/systemd/network# cat enp3s0.network 
[Match]
Name=enp3s0

[Network]
VLAN=vlan10
root@hydrogen:/etc/systemd/network# cat vlan10.netdev 
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10
root@hydrogen:/etc/systemd/network# cat vlan10.network 
[Match]
Name=vlan10

[Network]
Bridge=br-mgmt
root@hydrogen:/etc/systemd/network# cat br-mgmt.netdev 
[NetDev]
Name=br-mgmt
Kind=bridge
root@hydrogen:/etc/systemd/network# cat br-mgmt.network 
[Match]
Name=br-mgmt

[Network]
Address=172.29.236.21/22

Here’s a summary of the configurations:

  • Physical network interface is enp3s0
  • VLAN 10 is trunked down from a switch to that interface
  • Bridge br-mgmt should be on VLAN 10 (only send/receive traffic tagged with VLAN 10)

Once that was done, I restarted systemd-networkd to put the change into effect:

# systemctl restart systemd-networkd

Great! Let’s check our work:

root@hydrogen:~# brctl show
bridge name bridge id       STP enabled interfaces
br-mgmt     8000.0a30a9a949d9   no      
root@hydrogen:~# networkctl
IDX LINK             TYPE               OPERATIONAL SETUP     
  1 lo               loopback           carrier     unmanaged 
  2 enp2s0           ether              routable    configured
  3 enp3s0           ether              degraded    configured
  4 enp4s0           ether              off         unmanaged 
  5 enp5s0           ether              off         unmanaged 
  6 br-mgmt          ether              no-carrier  configuring
  7 vlan10           ether              degraded    unmanaged 

7 links listed.

So the bridge has no interfaces and it’s in a no-carrier status. Why? Let’s check the journal:

# journalctl --boot -u systemd-networkd
Jan 15 09:16:46 hydrogen systemd[1]: Started Network Service.
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: netdev exists, using existing without changing its parameters
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not append VLANs: Operation not permitted
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Failed to assign VLANs to bridge port: Operation not permitted
Jan 15 09:16:46 hydrogen systemd-networkd[1903]: br-mgmt: Could not set bridge vlan: Operation not permitted
Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp3s0: Configured
Jan 15 09:16:59 hydrogen systemd-networkd[1903]: enp2s0: Configured

The Could not append VLANs: Operation not permitted error is puzzling. After some searching on Google, I found a thread from Lennart:

> After an upgrade, systemd-networkd is broken, exactly the way descibed
> in this issue #3876[0]

Please upgrade to 231, where this should be fixed.

Lennart

But Ubuntu 16.04 has systemd 229:

# dpkg -l | grep systemd
ii  libpam-systemd:amd64                229-4ubuntu13                      amd64        system and service manager - PAM module
ii  libsystemd0:amd64                   229-4ubuntu13                      amd64        systemd utility library
ii  python3-systemd                     231-2build1                        amd64        Python 3 bindings for systemd
ii  systemd                             229-4ubuntu13                      amd64        system and service manager
ii  systemd-sysv                        229-4ubuntu13                      amd64        system and service manager - SysV links

I haven’t found a solution for this quite yet. Keep an eye on this post and I’ll update it once I know more!


ICC color profile for Lenovo ThinkPad X1 Carbon 4th generation

By Major Hayden

My new ThinkPad arrived this week and it is working well! The Fedora 25 installation was easy and all of the hardware was recognized immediately.

Hooray! pic.twitter.com/OiPSHREMLo

— Major Hayden (@majorhayden) January 9, 2017

However, there was a downside. The display looked washed out and had a strange tint. It seemed to be more pale than the previous ThinkPad. The default ICC profile in GNOME didn’t help much.

There’s a helpful review over at NotebookCheck that has a link to an ICC profile generated from a 4th generation ThinkPad X1 Carbon. This profile was marginally better than GNOME’s default, but it still looked a bit more washed out than what it should be.

I picked up a ColorMunki Display and went through a fast calibration in GNOME’s Color Manager. The low quality run finished in under 10 minutes and the improvement was definitely noticeable. Colors look much deeper and less washed out. The display looks very similar to the previous generation ThinkPad X1 Carbon.

Feel free to give my ICC profile a try on your 4th generation X1 and let me know what you think!


Display auditd messages with journalctl

By Major Hayden

All systems running systemd come with a powerful tool for reviewing the system journal: journalctl. It allows you to get a quick look at the system journal while also allowing you to heavily customize your view of the log.

I logged into a server recently that was having a problem and I found that the audit logs weren’t going into syslog. That’s no problem — they’re in the system journal. The system journal was filled with tons of other messages, so I decided to limit the output only to messages from the auditd unit:

$ sudo journalctl -u auditd --boot
-- Logs begin at Thu 2015-11-05 09:20:01 CST, end at Thu 2017-01-05 09:38:49 CST. --
Jan 05 07:47:04 arsenic systemd[1]: Starting Security Auditing Service...
Jan 05 07:47:04 arsenic auditd[937]: Started dispatcher: /sbin/audispd pid: 949
Jan 05 07:47:04 arsenic audispd[949]: priority_boost_parser called with: 4
Jan 05 07:47:04 arsenic audispd[949]: max_restarts_parser called with: 10
Jan 05 07:47:04 arsenic audispd[949]: audispd initialized with q_depth=150 and 1 active plugins
Jan 05 07:47:04 arsenic augenrules[938]: /sbin/augenrules: No change
Jan 05 07:47:04 arsenic augenrules[938]: No rules
Jan 05 07:47:04 arsenic auditd[937]: Init complete, auditd 2.7 listening for events (startup state enable)
Jan 05 07:47:04 arsenic systemd[1]: Started Security Auditing Service.

This isn’t helpful. I’m seeing messages about the auditd daemon itself. I want the actual output from the audit rules.

Then I remembered: the kernel is the one that sends messages about audit rules to the system journal. Let’s just look at what’s coming from the kernel instead:

$ sudo journalctl -k --boot
-- Logs begin at Thu 2015-11-05 09:20:01 CST, end at Thu 2017-01-05 09:40:44 CST. --
Jan 05 07:46:47 arsenic kernel: Linux version 4.8.15-300.fc25.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 6.2.1 20160916 (Red Hat 6.2.1-2
Jan 05 07:46:47 arsenic kernel: Command line: BOOT_IMAGE=/vmlinuz-4.8.15-300.fc25.x86_64 root=/dev/mapper/luks-e... ro rd.luks
Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 05 07:46:47 arsenic kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 05 07:46:47 arsenic kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256

This is worse! Luckily, the system journal keeps a lot more data about what it receives than just the text of the log line. We can dig into that extra data with the verbose option:

$ sudo journalctl --boot -o verbose

After running that command, search for one of the audit log lines in the output:

    _UID=0
    _BOOT_ID=...
    _MACHINE_ID=...
    _HOSTNAME=arsenic
    _TRANSPORT=audit
    SYSLOG_FACILITY=4
    SYSLOG_IDENTIFIER=audit
    AUDIT_FIELD_HOSTNAME=?
    AUDIT_FIELD_ADDR=?
    AUDIT_FIELD_RES=success
    _AUDIT_TYPE=1105
    AUDIT_FIELD_OP=PAM:session_open
    _SELINUX_CONTEXT=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
    _AUDIT_LOGINUID=1000
    _AUDIT_SESSION=3
    AUDIT_FIELD_ACCT=root
    AUDIT_FIELD_EXE=/usr/bin/sudo
    AUDIT_FIELD_GRANTORS=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix
    AUDIT_FIELD_TERMINAL=/dev/pts/4
    _PID=2666
    _SOURCE_REALTIME_TIMESTAMP=1483631103122000
    _AUDIT_ID=385
    MESSAGE=USER_START pid=2666 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'

One of the identifiers we can use is _TRANSPORT=audit. Let’s pass that to journalctl and see what we get:

$ sudo journalctl --boot _TRANSPORT=audit
-- Logs begin at Thu 2015-11-05 09:20:01 CST. --
Jan 05 09:47:24 arsenic audit[3028]: USER_END pid=3028 uid=0 auid=1000 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
... more log lines snipped ...

Success! You can get live output of the audit logs by tailing the output:

sudo journalctl -af _TRANSPORT=audit
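
If you want the structured fields rather than the rendered message line, ask journalctl for JSON output:

$ sudo journalctl --boot _TRANSPORT=audit -o json-pretty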

For more details on journalctl, refer to the online documentation.

