Bootstrap Your MySQL or MariaDB Galera Cluster Using Systemd

I had a brief power outage this morning, and I had to go look up how to bootstrap my MariaDB Galera cluster again. I’m finally documenting it here to share it with you!



Use the commands below to start the first node in a MariaDB Galera Cluster or bootstrap a failed cluster. Make sure you run these commands on the most up-to-date node! The rest of the nodes can be started normally after the first one has been bootstrapped.

systemctl set-environment MYSQLD_OPTS="--wsrep-new-cluster"
sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/g' /var/lib/mysql/grastate.dat
systemctl start mariadb
systemctl unset-environment MYSQLD_OPTS
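To identify the most up-to-date node before bootstrapping, compare the seqno recorded in each node’s grastate.dat. A minimal sketch, using an illustrative copy of the file under /tmp (on a real node the file lives at /var/lib/mysql/grastate.dat, and the values shown are made up):

```shell
# Illustrative grastate.dat contents; the node with the highest seqno has the
# most recent committed state. A seqno of -1 means the node stopped uncleanly
# and its position is unknown.
printf 'version: 2.1\nuuid: 9acf4d34-0001-11e9-0000-000000000000\nseqno: 1532\nsafe_to_bootstrap: 0\n' > /tmp/grastate.dat

# Print just the seqno value
awk '/^seqno:/ {print $2}' /tmp/grastate.dat
# 1532
```

Run the same check on every node and bootstrap from the one with the highest seqno.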

For more information on the second command, see this article under the “Safe to Bootstrap” Protection section. These commands were tested on CentOS 7 running MariaDB 10.1.38. Let me know if it works for you in the comments.

Should IT Professionals Learn to Code?

Do you have a non-development career in technology? Do you ever ask yourself if it would be worth the time to learn to code? If so, rest assured: the answer is absolutely YES! But what do you have to gain by learning a programming language or two?


You will have a huge competitive edge in your industry.

My ability to troubleshoot and understand technology has increased exponentially since I made the decision to learn to develop. Programming forces you to understand what you’re doing at a deeper level. The first language I learned was PHP. This has led to my understanding of web servers, SEO, security, TLS/SSL, and enabled me to create websites. That is a short list, but this point cannot be stressed enough: You will have a profound new way of seeing and understanding how things work and why they work the way they do.

Automate your life!

Do you hate mundane, repetitive tasks? Me too, so I refuse to do them! Large-scale changes in Active Directory, migrating print servers, software installation, configuration changes, OS image creation, audits; the list is endless. Sometimes tasks I’ve worked on were expected to take days, but I discovered ways to complete them within hours. Unfortunately, I’m too much of a show-off to sandbag my work and enjoy the free time, but that too has paid off in raises and promotions.

Add value to the company you work for.

I work in the healthcare industry. There, IT isn’t seen as adding value to the company; it’s just another cost of doing business. If your healthcare employer thinks he or she can save money by outsourcing IT, you’ll be out of a job. Think “the cloud…” Your customers are your “users,” and oftentimes their perception is simply that IT is always changing things and making their jobs harder. What if you could automate their mundane tasks, saving them time, making them more productive, and simplifying their jobs? This can change their perception of IT, save your company money, and make your users more efficient, allowing the company to increase profits. Happy users and a happy employer? Win-win!

This also has the potential to lead to promotions into positions that didn’t previously exist. If you can show your company ways of adding value that they’ve never thought of, it will set you apart in a big way. This proved especially true for me when I developed business intelligence dashboards, allowing senior management to get insight into real-time operations at a glance. If you want to stop punching the clock and have your performance measured in results rather than time and effort, this is a great way to do it! Can you say “job security?”

Become a creator.

Humans love to create! It’s in our DNA from birth, but building clubhouses, and whatever else we may want to build as adults, can be expensive. Learning to write code can be free, and turning one of your ideas into a reality doesn’t have to cost you a dime. It also delivers the same pride and satisfaction you get from anything else you might create.

Become an entrepreneur or freelancer.

Being employed is fine, but if you have big ambitions in life to one day start your own business or otherwise become independent, your ability to code can be a huge asset.

Share your code with the world.

Your code may not be something you can sell, or perhaps you have no interest in marketing it. You can host it in a GitHub repository for others to use. Writing code sometimes feels to me like playing “The Sims”: a lot of satisfaction comes from building something and watching others use it. Interacting with other developers who are interested in using your code is also gratifying, and nothing compares to receiving an email thanking you for your work.

In conclusion, learning to code has absolutely improved my life. Everything we learn changes the way we perceive and interact with the world. When I see problems, I begin to imagine solutions in my mind. No more endless googling for a solution that may not even exist. Start learning to code today and become the solution!

To get started, check out the programming courses at Udemy.com.

What do you think? Leave your opinion in the comments!

Control Anything with Alexa Using Node.js

I recently watched some interesting YouTube videos demonstrating the use of a Python library called “fauxmo” to create fake Wemo plugs that Alexa can control. Unfortunately my Python-fu isn’t as strong as I’d like it to be, so I searched npm for a similar library in Node.js. The one I found wasn’t working for me, and I noticed others complaining about the same thing in the repository’s GitHub issues. It was then I decided to make a new library, and thus node-fauxmo was born!



These personal projects are, to me, a lot like playing The Sims. I like to create something and then watch the world use it. I don’t feel like this project is getting the attention it deserves, so I’m going to break down the installation and setup and provide a basic script to get you started.

Step 1 – Download and Install Node.js

For starters, you’ll need to download and install Node.js. Windows and macOS users can download it from https://nodejs.org/en/download/. Linux users can install it using their native package manager; I recommend searching for instructions specific to the distribution you’re running. Trust me, there are tons of them!

Step 2 – Make a Directory for Your Project

Make a directory somewhere to hold your project, launch a command prompt or terminal, and change into the directory you just created. In the example below, I’m making a folder on my Windows 10 Desktop named “control-alexa”.

C:\Users\Lyas> mkdir Desktop\control-alexa

C:\Users\Lyas> cd Desktop\control-alexa

C:\Users\Lyas\Desktop\control-alexa>

Step 3 – Download the node-fauxmo Library

Run the following command so that npm (the Node package manager) downloads the node-fauxmo library for use in your project. You’ll notice a node_modules directory gets created containing the library and its dependencies.

npm install node-fauxmo

Step 4 – Create the Configuration for Your Devices

Create a file named index.js in the directory and paste the following code into it.

'use strict';

const FauxMo = require('node-fauxmo');

let fauxMo = new FauxMo(
{
	devices: [{
		name: 'Fake Device 1',
		port: 11000,
		handler: function(action) {
			console.log('Fake Device 1:', action);
		}
	},
	{
		name: 'Fake Device 2',
		port: 11001,
		handler: function(action) {
			console.log('Fake Device 2:', action);
		}
	}]
});

This example creates two devices named “Fake Device 1” and “Fake Device 2”, listening on ports 11000 and 11001. Each device needs its own unique port. You can change these to any ports you’d like, typically between 1024 and 65535, as long as they don’t conflict with ports used by an existing process. To make sure you’ve got everything where it belongs, here’s a screenshot of my directory. The package-lock.json file was created automatically and is not important in this case.

Step 5 – Start Your Devices!

Start your “fake” devices by running the command node index.js. This will cause node to start listening on UDP port 1900 for SSDP Discovery requests as well as listen on any TCP ports specified in your index.js configuration for On/Off requests from Alexa.

node index.js

If everything is working correctly, you should see output like the text below.

C:\Users\Lyas\Desktop\control-alexa>node index.js
Adding multicast membership for 192.168.1.198
Adding multicast membership for 127.0.0.1
server is listening on 11000
server is listening on 11001

If you get an error like “Error: bind EADDRINUSE 192.168.1.198:1900,” you’ll need to find which process is already listening on port 1900 and stop it. In my case, running Windows 10, I needed to stop the native “SSDP Discovery” service. If this is the case for you too, you can stop it with the command “net stop SSDPSRV”.

net stop SSDPSRV

Step 6 – Tell Alexa to Discover New Wemo Plugs

Finally, launch your Alexa companion app and tell it to discover new Belkin Wemo plugs.

After the devices have been discovered, you can tell Alexa, “Turn on/off Fake Device 1.” In the console/terminal you’ll see output like the text below from the handlers above, with “1” representing on and “0” representing off.

Fake Device 2: 1
Fake Device 2: 0
Fake Device 1: 1
Fake Device 1: 0
Fake Device 1: 1
Fake Device 1: 0

Now it’s time to get creative and write some custom JavaScript in the “handler” callback for your devices! It also comes in handy to create some Alexa “routines” with custom phrases more appropriate to whatever you program her to do. For some inspiration, check out my YouTube video where I get Alexa to launch Chrome and lock my PC using node-fauxmo: https://www.youtube.com/watch?v=tbjVvMIh810. Let me know in the comments what kinds of interesting, custom things you’re able to get Alexa to do, and whether you’re using some cool hardware like the Raspberry Pi to host your fake Wemo devices.

Get an A+ with Qualys SSL Labs Server Test on an Apache Web Server

Anyone responsible for hosting web services protected by SSL/TLS should be at least curious about how they might score against Qualys SSL Labs Server Test. I know I was when I first became aware of the tool. The results may surprise you, and you’ll probably learn a lot if you actually put the effort into securing and optimizing your configuration to get a higher score. I’d like to share some of my Apache configurations to hopefully save some folks out there a little time and raise awareness about web security.



I’ll start by removing all configurations I’ve added to achieve my A+ score, and we’ll slowly tighten the screws to see the effect each configuration has on the results of the test.

Ouch! If I’m being honest, I may have intentionally sabotaged my Apache config a little to get a score like this. It turns out that a fully patched CentOS 7 web server running Apache 2.4.6 does an OK job of being secure out of the box. I enabled all possible ciphers, excluded the secure ones, and used a 1024-bit certificate issued by an untrusted CA to add a little dramatic effect. I tried to make things worse by enabling SSLv2 and SSLv3, but they are no longer supported in the version of Apache I’m using. Since I can’t demonstrate that here, just make sure you have a line like this in your Apache configuration to ensure all insecure SSL/TLS protocols are disabled.

SSLProtocol all -SSLv2 -SSLv3 -TLSv1

For our first change, let’s fix that certificate by requesting one using Let’s Encrypt with at least a 2048-bit private key. Make sure to check out CertificateTools.com for a super easy way to generate your CSR. It supports RSA and ECC keys and provides the OpenSSL commands so you can run them locally!

That’s good progress. We’ve gotten rid of a few warnings, but we still have an ugly “F”. Next we’ll make some changes to the supported ciphers.

Excellent! In these results, we notice that the server does not support “Forward Secrecy.” I intentionally left out the ECDHE suite of ciphers just to bring attention to this and stress the importance of making sure these ciphers are enabled. For our final cipher hardening and to fully support perfect forward secrecy, we need to make sure the following lines exist in our config.

SSLHonorCipherOrder on
SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4

Looking good! Now, to finally get our server to score that A+, we need to enable HTTP Strict Transport Security (HSTS). This is an additional response header, cached by the browser for the amount of time specified in the header, that tells the browser to force the use of HTTPS. It prevents software like SSLStrip from intercepting web requests and convincing your browser to fall back to HTTP. On a WordPress site this can be enabled with a simple plugin, but Apache gives us a great way to enable it in our config with the following line.

Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
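Putting the pieces together, here is a minimal sketch of the directives discussed above inside a TLS virtual host. The certificate paths are placeholders for your environment, and the Header directive requires mod_headers to be loaded:

```apache
<VirtualHost *:443>
    SSLEngine on
    # Placeholder certificate paths; substitute your own files
    SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
    SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
    SSLCertificateChainFile /etc/pki/tls/certs/example.com-chain.crt

    # Disable the insecure SSL/TLS protocol versions
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1

    # Prefer the server's cipher order and exclude weak ciphers
    SSLHonorCipherOrder on
    SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4

    # HSTS: force HTTPS for two years, including subdomains
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
</VirtualHost>
```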

Perfect! Now we can technically go one step further with our HTTPS security by creating some additional headers to support a feature called HTTP Public Key Pinning (HPKP). This will tell your browser to store at least one certificate in the chain upon its first visit to a given site. If the next visit doesn’t contain the cached certificate in the chain, it will prevent the user from being able to visit the site. This is extremely effective at preventing man-in-the-middle (MITM) attacks, but requires a strong understanding of how it works and lots of diligence to maintain it properly. Currently only Chrome, Firefox and Opera support HPKP, and Chrome has announced plans to remove support for it because of the possibility for an attacker to install malicious pins or for a site operator to accidentally block visitors. Given that, it’s not something I would recommend, but I want to at least touch on it for completeness.

I hope this article was helpful and informative. Please leave questions and comments below.

Restore VHD Image to Disk/Partition with Linux dd

Convert a VHD image from a native Windows backup to raw format using qemu-img, and write it directly to a disk or partition with the Linux dd command.

I’ve recently been evaluating native Windows Server Backup as an option for bare-metal backup and recovery for our remaining physical servers at work. The utility creates several XML files and a VHD image for each partition it backs up. It seems to work OK for the most part, but I ran into a problem with a system that, for some unknown reason, had a 38MB boot partition with insufficient space to create a VSS snapshot, preventing the tool from properly backing up the partition. I’ve read all the articles about allocating storage on a separate partition to get VSS to behave, but I could never get it to function correctly.



This got me thinking… I have some personal trust issues with the reliability of Microsoft products to begin with, and these problems were only reinforcing the fear of something going wrong during the restore process. This led me to start researching restore options using the VHD files produced by native Windows Server Backup.

Option 1 is to restore using a Windows installation CD and select the restore option. Option 2 is to mount the VHD and manually copy files; this is really only good for individual file restores, and obviously not something you’d want to do for a bare-metal restore. Option 3 is to restore the VHD image directly to disk. This is the option I was most interested in, and it made sense to me that there would be a straightforward way of doing it, since every other bare-metal backup solution I’ve used has had this sort of option. While searching for a tool to write a VHD image to disk, I found “VHD2Disk”. Unfortunately, this tool was designed to do exactly that: write a VHD to a whole disk, with no option for writing to a partition. Feel free to correct me if I’m wrong, but I see no way of ever getting two partitions onto a single disk with this tool, which makes it useless for my purposes.

After finding no tool that could write an image to a partition, I became curious whether I could just use the dd command in Linux. After all, I would have instinctively turned to dd if I were doing this on Linux: dd does block-by-block copying, and a block is a block regardless of which OS you’re using. I quickly learned this can’t be done directly with a VHD because it isn’t in raw format, but qemu-img supports the VHD format and can convert it to a raw image. Below you’ll find detailed instructions on how to convert the VHD and use dd to write your new image to a disk or partition. I’ll also give some details on getting the system to boot if, like me, you don’t have a good backup of the boot partition.

The backup directory created by Windows Server Backup will look like this. I’ve highlighted the “interesting files” that I’ll mention throughout the article. The one ending in “Components.xml” has useful information about the disk partition layout, which comes in handy when re-creating partitions on your new disk. The .vhd file is the actual image data.

vhd image files

The first thing we need to do is convert the VHD image to raw format. To do so, you’ll need access to a Linux environment with the qemu-img command. I recommend using either a Clonezilla or GParted live CD, as both come with all sorts of utilities pre-installed for disk imaging and partitioning. Boot the CD on the system you will be restoring the image to. When it finishes booting, run the following commands to install qemu-img (you may need to prefix each command with sudo if you’re not root; keep this in mind for the remaining commands as well):

apt-get update
apt-get install qemu-utils -y

My backups are on a Windows file share, so I’ll use the following command to mount them at the /mnt directory:

mount -t cifs -o username=YOURUSERNAME,domain=YOURDOMAIN //HOSTNAME/PATH /mnt

You’ll be prompted for your password when mounting the share. Be sure to replace YOURUSERNAME, YOURDOMAIN, HOSTNAME, and PATH with the information appropriate for your environment.
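If you’d rather skip the interactive password prompt, mount.cifs also accepts a credentials file. A sketch, with the same placeholder account details as above (the /tmp/smbcred path is just for illustration; pick a location appropriate for your environment and keep it readable by root only):

```shell
# Write the placeholder credentials to a file only root can read
cat > /tmp/smbcred <<'EOF'
username=YOURUSERNAME
password=YOURPASSWORD
domain=YOURDOMAIN
EOF
chmod 600 /tmp/smbcred

# Then mount with the credentials= option instead of username=/domain=:
#   mount -t cifs -o credentials=/tmp/smbcred //HOSTNAME/PATH /mnt
```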

Next, change into the directory containing the VHD file you need to convert (the path used in this command may vary greatly depending on your environment):

cd /mnt/WindowsImageBackup/SERVERNAME/Backup\ 2016-08-05\ 133013/

Use the following command to convert the VHD image into raw format:

qemu-img convert -f vpc -O raw c9887432-6c68-11e0-a354-806e6f6e6963.vhd myserver.raw

You’ll need to repeat this command for any additional partitions you need to convert. Be sure to change the .vhd and .raw filenames to those appropriate for your environment. To be clear, the .vhd filename should be the one that exists in this directory like the highlighted file in the screenshot above, and the .raw filename can be whatever you want to name it.

You’ll notice a new file is created that reflects the full size of the partition rather than just the data it contains. This is expected given the nature of the raw image format.

-rwxr-xr-x 1 root root  668 Aug  5 13:51 BackupSpecs.xml
-rwxr-xr-x 1 root root  61G Aug  5 13:51 c9887432-6c68-11e0-a354-806e6f6e6963.vhd
-rwxr-xr-x 1 root root 149G Aug  5 21:39 myserver.raw

The conversion process can take a long time depending on the size of the partition. You can use the following command to make qemu-img print its progress (I’ve noticed there is often some delay before it writes to stdout):

root@debian# kill -SIGUSR1 `pidof qemu-img`
root@debian#     (22.03/100%)

Next, you’ll need to create the 100MB boot partition (unless you’re restoring only a single partition and all others are fully intact) and any additional partitions the system originally had. I’ll assume you know how to do this, but you can use the output below for help if necessary. If you don’t know the original partition layout, you can use the raw image size as a hint, or the “Components.xml” file that Windows Server Backup generates in the server’s backup directory. With the BytesPerSector, PartitionOffset, and PartitionLength values contained in that file, you can re-create the exact partition table.

PartitionOffset / BytesPerSector = starting sector
(PartitionOffset + PartitionLength) / BytesPerSector - 1 = ending sector
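A quick worked example of the sector math, using illustrative byte values of the kind found in a Components.xml (these happen to reproduce the second partition of the 160 GiB disk shown in the fdisk output below):

```shell
# Illustrative values from a Components.xml
BytesPerSector=512
PartitionOffset=105906176        # bytes from the start of the disk
PartitionLength=171692785664     # partition size in bytes

# Starting sector = offset / sector size
start=$(( PartitionOffset / BytesPerSector ))
# Ending sector = (offset + length) / sector size, minus one
end=$(( (PartitionOffset + PartitionLength) / BytesPerSector - 1 ))

echo "$start $end"
# 206848 335544319
```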

fdisk /dev/sda

Welcome to fdisk (util-linux 2.27).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-335544319, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-335544319, default 335544319): +100M

Created a new partition 1 of type 'Linux' and of size 100 MiB.

Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 7
Changed type of partition 'Linux' to 'HPFS/NTFS/exFAT'.

Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2):
First sector (206848-335544319, default 206848):
Last sector, +sectors or +size{K,M,G,T,P} (206848-335544319, default 335544319):

Created a new partition 2 of type 'Linux' and of size 159.9 GiB.

Command (m for help): t
Partition number (1,2, default 2): 2
Partition type (type L to list all types): 7

Changed type of partition 'Linux' to 'HPFS/NTFS/exFAT'.

Command (m for help): p
Disk /dev/sda: 160 GiB, 171798691840 bytes, 335544320 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5dcc5434

Device     Boot  Start       End   Sectors   Size Id Type
/dev/sda1  *      2048    206847    204800   100M  7 HPFS/NTFS/exFAT
/dev/sda2       206848 335544319 335337472 159.9G  7 HPFS/NTFS/exFAT

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

If you created the 100MB boot partition, format it as NTFS with the default “System Reserved” label:

mkfs.ntfs -f /dev/sda1 -L "System Reserved"

A VHD always stores its contents as a partition within the image, which means we have to find the offset where the data actually begins in the raw image before we write it to disk. Use the following command:

fdisk -l myserver.raw

Disk myserver.raw: 148.1 GiB, 158967767040 bytes, 310483920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device        Boot Start       End   Sectors   Size Id Type
myserver.raw1        128 310483071 310482944 148.1G  7 HPFS/NTFS/exFAT

The values “512” and “128” are what we need from this output: the sector size is 512 bytes, and the partition starts at sector 128. Now we have all the information we need to tell dd to write the image to our physical disk with this command:

dd if=myserver.raw bs=512 skip=128 of=/dev/sda2

You’ll need to repeat this command for any additional partitions you need to restore. You can use the following command to get the status of the dd process:

kill -SIGUSR1 `pidof dd`

369665+0 records in
369665+0 records out
189268480 bytes (189 MB) copied, 12.4497 s, 15.2 MB/s
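As a sanity check on the numbers above, skip=128 at bs=512 means dd starts reading 65536 bytes into the raw image. On newer GNU coreutils versions of dd you can also skip by bytes and have progress printed continuously, without the SIGUSR1 trick (a sketch, using the same filenames as above):

```shell
# 128 sectors at a 512-byte block size = 65536 bytes into the image
offset=$((128 * 512))
echo "$offset"
# 65536

# GNU dd can skip by bytes and report progress directly (sketch):
#   dd if=myserver.raw of=/dev/sda2 bs=1M iflag=skip_bytes skip=65536 status=progress
```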

When the dd command finishes writing the image to /dev/sda2, you should be able to mount the NTFS partition and view the files like so:

mkdir /sda2
mount -t ntfs-3g /dev/sda2 /sda2
ls -l /sda2
total 6267677
lrwxrwxrwx 2 root root         60 Jul 14  2009 Documents and Settings -> /sda2/Users
-rwxrwxrwx 1 root root 6327107584 Jul 31 11:03 pagefile.sys
drwxrwxrwx 1 root root       4096 Feb 17  2015 Patch Management
drwxrwxrwx 1 root root          0 Jul 14  2009 PerfLogs
drwxrwxrwx 1 root root       4096 Jun 17 07:44 ProgramData
drwxrwxrwx 1 root root       4096 Jul 21  2014 Program Files
drwxrwxrwx 1 root root       4096 Mar 22  2013 Program Files (x86)
drwxrwxrwx 1 root root          0 Apr 21  2011 Recovery
drwxrwxrwx 1 root root       8192 Apr 22 19:10 $Recycle.Bin
drwxrwxrwx 1 root root       4096 Aug  6 13:30 System Volume Information
drwxrwxrwx 1 root root       4096 May 30 14:16 Users
drwxrwxrwx 1 root root      24576 May 30 14:07 Windows

After you’ve restored all of your partitions, it’s time to reboot. Don’t forget to make sure the boot flag is set on your boot partition. If you didn’t have a copy of the boot partition, you’ll need to use the Windows installation CD to repair the MBR. This usually involves a combination of the startup repair option available from the installation CD and some of the boot repair utilities you can run from a command prompt on the installation CD, like these:

bootrec /fixboot
bootrec /fixmbr
bootrec /rebuildbcd
cdromdriveletter:\boot\bootsect /nt60 SYS /mbr

UPDATE:

I recently found a cool new way of mounting the VHD image directly and imaging from the virtual block device, instead of waiting for the qemu-img conversion and using up precious storage for both the VHD image you already have and a raw copy of the same data. Below are the commands to load the NBD kernel module with the right arguments, mount the VHD image as a virtual block device, and perform a dd copy to your physical disk. This assumes you’ve already booted the live CD, installed qemu-utils, mounted the media containing your backups, and changed directory to the path containing the VHDs.

rmmod nbd
modprobe nbd max_part=16
qemu-nbd -c /dev/nbd0 c9887432-6c68-11e0-a354-806e6f6e6963.vhd
dd if=/dev/nbd0p1 of=/dev/sda2

You can see I’m using /dev/nbd0p1 as the source for the dd command. As I mentioned earlier in the article, each VHD image contains a partition, and nbd0p1 references the first (and only) partition on the nbd0 virtual block device. Previously we had to give dd a block size and offset to specify where the partition started; here the kernel handles that for us. Use the following command to remove the virtual block device for the VHD image when you’re done:

qemu-nbd -d /dev/nbd0

If you have any questions or if you found this post useful, please leave a comment!

Delete User Profiles Remotely Windows XP/Vista/7/2008/2012

In my workplace, our help desk needs the ability to quickly and easily delete user profiles remotely. I did a little tinkering with wbemtest and found I could call the Delete() method on any of the WMI objects returned by the query “SELECT * FROM Win32_UserProfile.” It properly deletes the profile’s associated files and registry keys the same way the native Windows GUI tools do. The problem with the native tool, however, is that you need to be logged in to use it, it’s fairly slow and clunky, and you can only select one profile at a time for deletion. This adds up to a lot of wasted time. So I took what I learned and created a little VBScript that made some WMI calls and deleted profiles. This worked great, but the help desk needed the ability to selectively choose which profiles get deleted through some form of user interface. I wanted the simplest possible solution with no dependencies (.NET, AutoIt DLLs, etc.). I found the best way to do that was to build an HTA application.



The first version of my profile cleanup HTA was very basic but served its purpose well. The problem was that everything was done using synchronous WMI calls. I’ve recently been playing with a lot of Node.js to understand this whole “non-blocking IO” asynchronous programming methodology, and it got me wondering if I could do the same in this HTA application. It’s not difficult to find examples online of creating WMI queries and calling methods asynchronously, but getting them to play nicely in an HTA application proved to be a challenge. At least for me : ).

One problem I had was that certain things only worked in JScript, while others only worked in VBScript. Fortunately, I found a way to use both and reference functions in either language from the other. The next problem was finding a way to reference the “WbemScripting.SWbemSink” object within the HTA. The way I found to do it was to reference the object by its class ID, like so:

<object id="oSink" classid="clsid:75718C9A-F029-11D1-A1AC-00C04FB6C223"></object>

My first attempt at improving the UI was to make the function calls using the JavaScript setTimeout function, but that didn’t seem to change anything. To prevent the window from freezing, I had to do everything asynchronously within WMI. I’m including links to both versions of the application. The old version should really only be used for educational purposes, as a before-and-after demonstration of asynchronous versus standard synchronous WMI calls. The second version works quite well and is safe to use in production; just be sure you don’t accidentally delete important data in a user’s profile. Any comments, suggestions, improvements or questions are welcome!

Profile Cleanup Utility - Delete User Profiles

Download HTA Delete User Profiles Utility

Profile Cleanup

Profile Cleanup_V2

GitHub Repo (Latest Build)

Two Factor Authentication with Freeradius for Horizon View

At work we were evaluating different options for enabling two-factor authentication for VMware Horizon View. They all cost more than we were interested in paying, and none could integrate with the communication platforms we wanted to use for delivering the PIN that serves as the “second factor.” Given that, my director gave me the opportunity to innovate and develop something custom.



Before we get started, you should know that I will not be providing a complete two-factor authentication solution for freeradius. My intention in this post is to demonstrate a working example of freeradius issuing an Access-Challenge response to a VMware View authentication request to achieve two-factor authentication. Further development will be necessary for a full “solution” (integrating the freeradius Perl module with LDAP or some other central authentication mechanism, as well as delivering and validating PINs). If you have any questions about how I achieved this, feel free to ask in the comments.

I had been looking for a good reason to play with freeradius, and I finally had one. After some research in VMware’s documentation, I knew I needed to learn how to get freeradius to send an “Access-Challenge” response.

https://pubs.vmware.com/view-52/index.jsp?topic=%2Fcom.vmware.view.administration.doc%2FGUID-73027CC6-8EA6-4887-A1F7-B40BF664E353.html
“If the RADIUS server issues an access challenge, View Client displays a dialog box similar to the RSA SecurID prompt for the next token code.”

Unfortunately, getting freeradius to do this is not well documented, but here are a few links I used for my research:
http://wiki.freeradius.org/guide/multiOTP-HOWTO
https://lists.freeradius.org/pipermail/freeradius-users/2008-August/030680.html
http://motp.sourceforge.net/
http://lists.freeradius.org/pipermail/freeradius-users/2011-January/051466.html
https://www.howtoforge.com/how-to-use-freeradius-with-linotp-2-to-do-two-factor-authentication-with-one-time-passwords
http://lists.freeradius.org/pipermail/freeradius-users/2012-May/060929.html
http://techtitude.blogspot.com/2014/12/freeradius-pap-challenge-authentication.html
http://lists.freeradius.org/pipermail/freeradius-users/2009-February/035675.html
http://www.mail-archive.com/freeradius-users@lists.freeradius.org/msg47441.html
http://lists.freeradius.org/pipermail/freeradius-users/2013-February/065099.html

I also read a few chapters from this book to get a better understanding of the configuration and inner workings of freeradius.

After all my research I used the example.pl code that comes with the freeradius perl module and modified the authenticate function like so:

sub authenticate {
        # For debugging purposes only
#       &log_request_attributes;
        # "0x6368616c6c656e6765" is the hex encoding of the string "challenge"
        if ($RAD_REQUEST{'State'} eq "0x6368616c6c656e6765") {
                # Second round trip: the user is answering the challenge with a PIN
                if ($RAD_REQUEST{'User-Password'} eq "1234") {
                        $RAD_REPLY{'Reply-Message'} = "Access granted";
                        return RLM_MODULE_OK;
                } else {
                        $RAD_REPLY{'Reply-Message'} = "Denied access by rlm_perl function";
                        return RLM_MODULE_REJECT;
                }
        } else {
                # First round trip: validate username/password, then issue the challenge
                if ($RAD_REQUEST{'User-Name'} eq "testusernamehere" && $RAD_REQUEST{'User-Password'} eq "testpasswordhere") {
                        $RAD_REPLY{'State'} = "challenge";
                        $RAD_CHECK{'Response-Packet-Type'} = "Access-Challenge";
                        $RAD_REPLY{'Reply-Message'} = "Enter your PIN.";
                        return RLM_MODULE_OK;
                } else {
                        $RAD_REPLY{'Reply-Message'} = "Denied access by rlm_perl function";
                        return RLM_MODULE_REJECT;
                }
        }
}

The code above is extremely bare-bones and serves only as an example of using the perl module with freeradius to send an authenticator an Access-Challenge response to an authentication request. You will want to change the "testusernamehere" and "testpasswordhere" strings to something more appropriate, and optionally the "1234" test PIN. The code first authenticates a user by validating their username and password. If that succeeds, an Access-Challenge response is sent to the authenticator and the "State" AVP (Attribute-Value Pair) is set to "challenge". When the authenticator receives the Access-Challenge, it prompts for a PIN. When the PIN is entered, the request is processed by the first block of code, because the "State" AVP set in the previous request ("challenge") now matches its hexadecimal encoding, "0x6368616c6c656e6765", in the first if statement. The same User-Name is sent as before, but this time User-Password must match "1234". Any other PIN will cause authentication to fail.
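You can verify the text/hex equivalence the code relies on with a quick shell one-liner (using the POSIX od utility):

```shell
# hex-encode the string "challenge"; this should print 6368616c6c656e6765
printf 'challenge' | od -An -tx1 | tr -d ' \n'
```

This is handy if you ever change the State value in the perl code and need the matching hex string for the comparison.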

Here are screenshots of the Horizon View client authentication behavior using a freeradius server with this configuration.

two factor authentication vmware view first factor

two factor authentication vmware view second factor

Show multicast IGMP group memberships on Cisco IOS, Windows, and Linux

I’ve been doing a lot of playing with multicast lately and I always have to google for a while to find these commands. I figured it was time to throw a post together for a quick reference. Hopefully someone else can benefit from this too.



Below you can find the commands to determine whether a system or switch port is a member of a multicast group on Cisco IOS, Windows, and Linux. Hosts use IGMP to join these groups, and there is no way to join a group manually; the operating system does it automatically when an application requests it. These commands come in handy when you're trying to figure out why you're not seeing the multicast traffic you're expecting.

Cisco IOS:

show ip igmp snooping groups

Windows:

netsh interface ip show joins

Linux:

ip maddress show

or

netstat -ng
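For scripting on Linux, you can check for a specific group by grepping the output of `ip maddress show`. Here's a sketch against canned sample output (both the sample text and the group address are hypothetical, just to show the pattern):

```shell
# sample output in the shape of `ip maddress show` (hypothetical)
sample='2:	eth0
	inet  224.0.0.1
	inet  239.255.255.250'

# grep -q exits 0 if the interface has joined the group
echo "$sample" | grep -q '239\.255\.255\.250' && echo joined
```

Swap the canned sample for the real command when using this in a health check.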

Windows screen recording with FFmpeg UScreenCapture and NGINX RTMP module

I recently came up with a unique and free way to do screen recording and broadcasting by leveraging a few unrelated, open source software components. The intention is not brief screen captures, but permanent recording: begin recording at logon/unlock and stop at logoff/lock, with the ability to monitor the session live, hear audio from the local microphone, and optionally activate the webcam and overlay it in a corner of the view.



Here’s a high-level overview of how everything will work:

  • NGINX is running with the RTMP module ready to receive RTMP AV streams and record them, making a new file every 5 minutes
  • FFmpeg launches at logon/unlock sending an RTMP stream to NGINX either locally or on a server remotely. It will use the UScreenCapture DirectShow filter and optionally connect to a local microphone and/or webcam.
  • During streaming, the session can be viewed live. FFplay, VLC, or flowplayer will work for this.
  • FFmpeg is killed at logoff/lock and the recording is stopped on NGINX.
  • Recordings can be viewed with ffplay or VLC.

Here’s what you’ll need to get it working:

I'm providing the NGINX build I found because it has the RTMP module compiled in, I've already put the stat.xsl file from the RTMP module in the html directory, and it already has the necessary configuration. It may not be the latest build out there, so feel free to treat it as a reference; you can probably find a better download elsewhere.

To get everything in place, extract your ffmpeg download into C:\ffmpeg. This way the executable will be located at C:\ffmpeg\bin\ffmpeg.exe. Do a normal “next, next, finish” install of UScreenCapture. Finally, download the nginx zip and extract it to C:\nginx so that the executable is located at C:\nginx\nginx.exe. Feel free to install these components in alternative locations, but understand that you will need to modify the commands I provide accordingly.

Before we get ahead of ourselves, let’s make sure everything is working correctly. Start by opening a command prompt and typing “C:\ffmpeg\bin\ffmpeg.exe -list_devices true -f dshow -i dummy”. We need to make sure that the dshow filter “UScreenCapture” is listed in the output.

C:\ffmpeg\bin\ffmpeg.exe -list_devices true -f dshow -i dummy
ffmpeg version N-73266-g4aa0de6 Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 4.9.2 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
  libavutil      54. 27.100 / 54. 27.100
  libavcodec     56. 45.101 / 56. 45.101
  libavformat    56. 40.100 / 56. 40.100
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 19.100 /  5. 19.100
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.100 /  1.  2.100
  libpostproc    53.  3.100 / 53.  3.100
[dshow @ 00000000032335c0] DirectShow video devices (some may be both video and audio devices)
[dshow @ 00000000032335c0]  "USB Video Device"
[dshow @ 00000000032335c0]     Alternative name "@device_pnp_\\?\usb#vid_046d&pid_0825&mi_00#7&218d6046&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 00000000032335c0]  "UScreenCapture"
[dshow @ 00000000032335c0]     Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\UScreenCapture"
[dshow @ 00000000032335c0]  "screen-capture-recorder"
[dshow @ 00000000032335c0]     Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\{4EA69364-2C8A-4AE6-A561-56E4B5044439}"
[dshow @ 00000000032335c0] DirectShow audio devices
[dshow @ 00000000032335c0]  "Microphone (USB Audio Device)"
[dshow @ 00000000032335c0]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\Microphone (USB Audio Device)"
[dshow @ 00000000032335c0]  "virtual-audio-capturer"
[dshow @ 00000000032335c0]     Alternative name "@device_sw_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\{8E146464-DB61-4309-AFA1-3578E927E935}"
[dshow @ 00000000032335c0]  "Microphone (Realtek High Defini"
[dshow @ 00000000032335c0]     Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\Microphone (Realtek High Defini"
dummy: Immediate exit requested

In the same command prompt, do the following:

cd C:\nginx

start "" nginx.exe

That should start nginx in the background, and you should be able to browse to http://127.0.0.1:81/ and see "Welcome to nginx for Windows!" I used port 81 in the configuration in C:\nginx\conf\nginx.conf to avoid conflicts with other web servers that might be installed. If for some reason nginx isn't working for you, check error.log in C:\nginx\logs. For any sort of production configuration, I highly recommend compiling the latest build with the RTMP module on a Linux server.
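For reference, the RTMP portion of the nginx.conf in that bundle looks something like this. This is a sketch, not the exact file I shipped; the port, application name, and recording path are assumptions you should match to your own setup:

```nginx
rtmp {
    server {
        listen 1935;

        # the "view" application the ffmpeg command publishes to
        application view {
            live on;

            # record every published stream, rolling to a new file every 5 minutes
            record all;
            record_path C:/nginx/recordings;
            record_interval 5m;
            record_unique on;
        }
    }
}

# inside the existing http { server { ... } } block, the statistics page:
#   location /stats {
#       rtmp_stat all;
#       rtmp_stat_stylesheet stat.xsl;
#   }
```

The record_interval directive is what produces a new file every 5 minutes, and record_unique timestamps the filenames so streams don't overwrite each other.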

Now, from a command prompt, enter the following command. If you'd like, you can use a simpler streaming URL like rtmp://127.0.0.1/view/test; I try to use something that will be unique if multiple streams are being broadcast, but still meaningful.

C:\ffmpeg\bin\ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -rtbufsize 1500M -f dshow -i video="UScreenCapture" -c:v libx264 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 40 -profile:v baseline -x264opts level=31 -pix_fmt yuv420p -preset ultrafast -f flv rtmp://127.0.0.1/view/%USERNAME%-%COMPUTERNAME%
ffmpeg version N-73266-g4aa0de6 Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 4.9.2 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
  libavutil      54. 27.100 / 54. 27.100
  libavcodec     56. 45.101 / 56. 45.101
  libavformat    56. 40.100 / 56. 40.100
  libavdevice    56.  4.100 / 56.  4.100
  libavfilter     5. 19.100 /  5. 19.100
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  2.100 /  1.  2.100
  libpostproc    53.  3.100 / 53.  3.100
Input #0, dshow, from 'video=UScreenCapture':
  Duration: N/A, start: 860828.177000, bitrate: N/A
    Stream #0:0: Video: rawvideo, bgr24, 3200x1200, 10 tbr, 10000k tbn, 10 tbc
[libx264 @ 000000000322bee0] frame MB size (200x75) > level limit (3600)
[libx264 @ 000000000322bee0] MB rate (150000) > level limit (108000)
[libx264 @ 000000000322bee0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 000000000322bee0] profile Constrained Baseline, level 3.1
[libx264 @ 000000000322bee0] 264 - core 146 r2538 121396c - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=10 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=40.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, flv, to 'rtmp://127.0.0.1/view/username-hostname':
  Metadata:
    encoder         : Lavf56.40.100
    Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 3200x1200, q=-1--1, 10 fps, 1k tbn, 10 tbc
    Metadata:
      encoder         : Lavc56.45.101 libx264
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
frame=   14 fps= 14 q=27.0 size=     134kB time=00:00:00.10 bitrate=10985.6kbits

If the stream is working properly, you should see some statistics at http://127.0.0.1:81/stats, and you should see recordings being generated within C:\nginx\recordings. Use VLC to play the recordings. To view the stream live with VLC, click Media->Open Network Stream and enter the network URL "rtmp://192.168.164.110/view/username-computername", replacing 192.168.164.110 with the address of your nginx server. Keep in mind that the username and computername here are case sensitive and should match exactly what is shown on the statistics page at http://127.0.0.1:81/stats.

vlc

Be patient as it can take some time for VLC to detect the video codec before it begins displaying. You can press “q” or Ctrl+c to stop the ffmpeg stream.

I did my best to tweak the command so that there is a good balance of quality and efficiency, but if you'd prefer higher quality video, try changing the -crf parameter to a lower value like 23, or a slower -preset value like "fast". A word of caution: the slower the preset you choose, the higher your CPU utilization will be. The "scale=trunc(iw/2)*2:trunc(ih/2)*2" part of the command avoids "not divisible by 2" errors when either the height or width of the stream resolution is an odd number. I ran into this in our VDI environment because the client screen can be resized to any dimensions, so odd sizes come up frequently.
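The even-rounding that scale filter expression performs is easy to see with shell integer arithmetic (1281 is just a hypothetical odd width):

```shell
# trunc(iw/2)*2 rounds an odd dimension down to the nearest even number
iw=1281
echo $(( (iw / 2) * 2 ))   # prints 1280
```

An even dimension passes through unchanged, so the filter is harmless when the resolution is already valid for yuv420p.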

If video is all you need, at this point you can simply run the following VBScript using Task Scheduler with logon and unlock events as the triggers:

Option Explicit

Dim WshShell

Set WshShell = CreateObject("Wscript.Shell")

WshShell.Run "C:\ffmpeg\bin\ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -rtbufsize 1500M -f dshow -i video=""UScreenCapture"" -c:v libx264 -vf ""scale=trunc(iw/2)*2:trunc(ih/2)*2"" -crf 40 -profile:v baseline -x264opts level=31 -pix_fmt yuv420p -preset ultrafast -f flv rtmp://127.0.0.1/view/" & WshShell.ExpandEnvironmentStrings("%USERNAME%") & "-" & WshShell.ExpandEnvironmentStrings("%COMPUTERNAME%"), 0, False

task_scheduler

To kill ffmpeg at logoff/lock, use task scheduler again with the appropriate triggers and run the command taskkill /f /im ffmpeg.exe.

When I first set out to get screen recording working for my purposes, I was originally attempting to save directly to an MP4 over a CIFS share, but I still had to kill the ffmpeg process, because we obviously want it running in the background and there is no way to interact with the process to stop it gracefully. Terminating the process that way corrupts the MP4. With NGINX receiving the RTMP stream and handling all of the recordings independently of ffmpeg, you can kill the process without corrupting the video files.

Be sure to do some testing to make sure ffmpeg is terminating and launching correctly during the events you are using to trigger it. It is a good idea to set up an idle timeout/screensaver that locks your workstation and kills ffmpeg’s stream to avoid wasting storage on useless video.

I’ll try to post some more flexible/dynamic scripts later to demonstrate how to capture audio from the local microphone and overlay a webcam. If you have any input or questions, please comment below.

Bash script to migrate all KVM or Xen virtual machines to another host with virsh/libvirt

I'm working on setting up two fully redundant servers to host all sorts of services from the house. Most of the HA is automated via keepalived scripts, but I needed another one to automatically migrate all VMs from one host to another using libvirt. This is analogous to putting an ESXi host in "maintenance mode". I thought I'd share the bash script I threw together.



First make sure you can successfully migrate manually then replace the $HOST variable with your target host and give it a shot. The script will first migrate all live VMs and then do an offline migration of all powered off VMs. Enjoy!

#!/bin/bash

HOST="lyasnode1"

echo "Migrating all VMs to $HOST"

# Live-migrate every running VM
for VM in $(virsh list --name); do
        echo "Migrating VM $VM live"
        virsh migrate --live --persistent --undefinesource "$VM" "qemu+ssh://$HOST/system"
done

# Offline-migrate whatever remains (powered-off VMs)
for VM in $(virsh list --all --name); do
        echo "Migrating VM $VM offline"
        virsh migrate --offline --persistent --undefinesource "$VM" "qemu+ssh://$HOST/system"
done
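If you'd rather pass the target host on the command line than edit the script each time, a parameter default works nicely (a small sketch; "lyasnode1" stays as the fallback):

```shell
#!/bin/bash
# use the first argument as the target host, falling back to a default
HOST="${1:-lyasnode1}"
echo "Migrating all VMs to $HOST"
```

Then "./migrate-all.sh lyasnode2" targets a different host, and running it with no argument behaves exactly like the hard-coded version.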