Archive for Server Administration

Importing Existing Keys and SSL Certificates Into Apache Tomcat

I rarely use Tomcat, but one of my clients is a Java guy and, as makes logical sense, uses Tomcat to serve the applications he writes, one of which required an SSL certificate. It’s no problem to create a new key and CSR and then import the certificate and certificate authority chain, but what if we already have an existing key and certificate for the same domain?

In our case, we had Apache serving the non-application stuff (in PHP, natch) on ports 80 and 443, with Tomcat on 8000 and 8443 (take that, Plesk!), and already had a certificate issued for the domain on the Apache side. Since the stuff used by Apache was in PEM format, I’ve included the step required to convert it to PKCS12, which is what we’ll use for the Java keystore. These instructions are taken from a CentOS box, so you may need to make some modifications for other operating systems. This is only here to serve as a guideline (and for my own future reference, primarily, because I know damned well I’ll forget again next year).

First, we need to concatenate the key, the certificate granted to us by the CA, and the CA bundle into a single file. This is done most simply like so:
cat your_domain.key your_domain.crt your_ca_bundle.crt > your_domain.key_crt_bundle.pem

Next, we convert the concatenated PEM data into PKCS12:
openssl pkcs12 -export -out your_domain.key_crt_bundle.p12 -in your_domain.key_crt_bundle.pem

Create a password for the resultant PKCS12 file and remember it for a moment, because you’ll need it when you import the PKCS12 into your Java keystore using the following command:
keytool -importkeystore -srckeystore your_domain.key_crt_bundle.p12 \
-srcstoretype pkcs12 -destkeystore your_domain.key_crt_bundle.jks -deststoretype jks

You’ll need to create a new password for the keystore, and then enter the password for the PKCS12 you created two steps back.
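To sanity-check the import before touching Tomcat, you can list the contents of the new keystore (you’ll be prompted for the keystore password you just created):

keytool -list -v -keystore your_domain.key_crt_bundle.jks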

Then, edit your Tomcat server.xml file and define the full path and filename of the newly-created keystore, as well as your keystore’s password. In our case, the default location was /etc/tomcat6/server.xml. If you don’t know how to configure Tomcat 6 for SSL at all, that’s beyond the scope of this particular post, and you will need to do some research. Also, do not pass GO! Do not collect $200. And may God have mercy on your soul.
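For reference only, and very much as a rough sketch rather than a drop-in config, the HTTPS <Connector> in a Tomcat 6 server.xml ends up looking something like this, with the port, keystore path, and password adjusted to your own setup:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/your_domain.key_crt_bundle.jks"
           keystorePass="your_keystore_password" />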

Finally, restart Tomcat with the good ol’-fashioned service tomcat6 restart (or equivalent), and you should be good to go. And, if not…. sucks to be you.

Ubuntu Not Recognizing Changes To /etc/hosts


A moment ago, I finally figured out why changes to /etc/hosts on my local Ubuntu desktop were not being honored. In the past, it worked just fine, as expected, but this morning, it refused to recognize changes. I searched all over the web and found lots of people with the same problem, but no solutions. Plenty of helpful suggestions, mind you, but nothing would work for the folks who tried them. So, the solution? My NSCD was caching it. Perhaps there was a default value change recently, or maybe I just somehow never noticed it before because I’d always added the entry prior to trying to work with the host. I’m not sure of the ultimate cause, but the fix is in:

sudo vim /etc/nscd.conf

Change:
enable-cache hosts yes
…. to:
enable-cache hosts no

And then restart NSCD:

sudo service nscd restart
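To confirm the change is actually being honored, you can query the resolver through NSS (which is where nscd sits) instead of pinging; the hostname here is just a stand-in for whatever entry you added to /etc/hosts:

getent hosts my-test-host.dev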

Voila! Finally, I can get on with my work for the day.

Horribly Slow Speeds On USB Stick, Ubuntu 12.04LTS (100KB/s?!?)


I just finished building a new server for the house here and downloaded the latest build of Ubuntu Server 12.04LTS. My desktop is running an upgraded version of the same (but Desktop, not Server edition). Trying to create a USB boot disk to install on the new box was painfully slow: it was going to take 2.5 days.

After searching all over the web to see what others thought, checking the USB settings in my BIOS, and even rebooting for the sake of a potential fix chalked up to voodoo, I realized the answer. When I checked the USB stick’s partition, it was – unsurprisingly – FAT32. Once I dropped the partition (the stick was brand-new, just out of the package) and created a new ext4 partition in its place, I created my new USB boot disk in 38 seconds. That’s much more like it.
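For reference, a rough command-line sketch of that repartitioning, with /dev/sdX standing in for your actual USB device (double-check with lsblk before running anything destructive):

sudo umount /dev/sdX1                 # unmount the factory FAT32 partition, if mounted
sudo parted -s /dev/sdX mklabel msdos mkpart primary ext4 1MiB 100%
sudo mkfs.ext4 /dev/sdX1              # build the new ext4 filesystem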

Custom sudo Login Prompt: Confuse Your Coworkers and Friends!


Quick way to have fun with Bash and sudo on a boring day. Insert the following into your /etc/bashrc, /etc/bash.bashrc, or similar file (as is appropriate for your distro and version):


# Note the '\'' sequences below: they produce a literal apostrophe inside the single-quoted alias.
alias sudo='sudo -p "Congratulations, %p! You are the one-millionth user to attempt to sudo to %U! Enter your password to see what you'\''ve won. "'

More info, from man sudo:

%H  expanded to the host name including the domain name (only if the machine's host name is fully qualified or the fqdn option is set in sudoers(5))
%h  expanded to the local host name without the domain name
%p  expanded to the name of the user whose password is being requested (respects the rootpw, targetpw and runaspw flags in sudoers(5))
%U  expanded to the login name of the user the command will be run as (defaults to root unless the -u option is also specified)
%u  expanded to the invoking user's login name
%%  two consecutive % characters are collapsed into a single % character
The prompt specified by the -p option will override the system password prompt on systems that support PAM unless the passprompt_override flag is disabled in sudoers.
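With the alias above in place and a stock sudoers (no rootpw or targetpw set), a hypothetical user named jane running a plain sudo ls would be greeted with something like:

Congratulations, jane! You are the one-millionth user to attempt to sudo to root! Enter your password to see what you've won.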

Announcing the Release of the System Detonation Library for PHP

As discussed somewhat at length in a rapidly-devolving thread on the PHP General mailing list, I am in favor of a function that, when called, will initiate a self-destruct sequence on the host system.  Well, it being a nice, sunny, spring Friday morning, I decided to offer just that:

Introducing the first public release of the System Detonation Library for PHP.

This useless extension provides one function with one purpose: to cause your server to explode.  Due to the obvious hazards involved, including (but not limited to) loss of hardware, limbs, and potentially life and liberty, this has only been tested on one single occasion, using a PC with Ubuntu 10.10 and a heavily-modified SVN version of PHP 5.3.6.  Thankfully, as the test was successful, there were no serious injuries.

First, you may download the package here.

Second, as a very basic course on the compilation and installation of this unofficial PHP extension, here are some simple instructions for Linux users.  All others are on their own, and this may (read: probably will) not work anyway…. which is a shame, because I know plenty of Windows boxes that should have the right to self-destruct as well.

  1. Download the package above.
  2. Extract it: tar -zxf detonate-0.2.tar.gz
  3. Change to the newly-created directory where the files are located: cd detonate-0.2/
  4. Build the wrappers for your version of the Zend/PHP API: phpize (NOTE: on Ubuntu-built packages, this command may be: phpize5)
  5. Build the necessary makefiles for your system: ./configure --with-detonate
  6. Compile the code: make
  7. Install the binary (as root, or using sudo): make install
  8. Edit your php.ini to load the newly-installed extension by adding this line: extension=detonate.so
  9. If you plan to use it via the CLI, you’re done.  For use on the web, remember to reload/restart your web server.
  10. Create a basic PHP script with the following: <?php detonate(); ?>
  11. Check your insurance coverage.
  12. Run the script created in Step #10.

And that’s all there is to it.  Feel free to install this on all of your systems and use it as a replacement for exit or die() in your scripts.  Because, unlike die(), this function will absolutely get the point across, once and for all.

Windows Server Says, “Network Cable Unplugged” When It’s Not?!?

Once again, stuck managing a Windows box. Yeah, I know, I’ll whine, bitch, moan, and cry you a river another time.

The Problem: Using the secondary NIC (PNET/VLAN), I found a lot of packet collisions during negotiation, handshaking, and identification, causing Windows to give up and basically say, “well, since it’s not working, the cable must physically have been removed, because there’s no way I could ever be wrong.”

Wro…. err…. incorrect, Windows. (You’re wrong.)

The Discoveries: The truth was, at least in my case, that Windows wasn’t properly handling the gigabit capabilities of the card in the box. I’m not the administrator for these machines (though they’re housed in our datacenter), so I can’t be certain that nothing had changed recently, but their staff said nothing at all had been modified. Perhaps that really was the case, and nothing had been changed — Windows has been known to do stranger things than this, of course, sometimes out of the blue.

The Solution (for my case): Your version of Windows dictates the exact navigation path, hence the ambiguity here (and sorry, but I logged out in a hurry, so this is from memory):

  1. Go to the screen where you can view your network adapters.
  2. Right-click the adapter showing the “Network Cable Unplugged” message and click “Properties.”
  3. Click the appropriate button to configure the network adapter, then click the tab for “Settings” or something of the like; you’ll see a list of parameters on the left, with their values on the right.
  4. Find the one related to speed and duplex, and if you see it set to “Auto” or similar, drop it to “100Mbps Full Duplex” and click OK.
  5. Close the properties dialog by clicking “OK” and see if the adapter is already coming back online. If not, disable and re-enable the adapter, and – if it was indeed the same issue – you should be back online within a few seconds.

Distributing php.net’s Synchronization Infrastructure

Several days ago, the primary server hosting all of the php.net site data that the mirrors around the world synchronize against became completely inaccessible. Due to security policies at the provider hosting the server, it was some time before we were able to have the machine returned to normal operational status. As a result, content across the network became stale, and the automated tests saw the mirrors as outdated and deactivated them. The incident pointed out a flaw that, though it was just an inconvenience this time, has the potential to grow into something more serious – including a sort of self-inflicted denial of service, if you will, were it to go unnoticed for several days and leave every mirror flagged as outdated.

Mark Scholten from Stream Service, an ISP based in the Netherlands and provider of an official mirror for their countrymen at nl.php.net, offered to put up a second rsync server, which gave me an idea: take the load off the primary server by distributing it across three regions.


(Tri-colored map of the three rsync regions; click the image to view the full-size version.)

Mark set up the European (EU) box in their Amsterdam datacenter; we (Parasane) had already set up an emergency rsync mirror in case the primary dropped out again, which would be repurposed for the Americas (AA); and I contacted Chris Chan at CommuniLink in Hong Kong for what would become the Asia-Pacific (AP) region. Chris had submitted an application to the official waiting list to become an official PHP mirror back in February of 2010.

Compiling data over the course of the last 12 months from mirrors in our network which had it readily available, accurate, and up to date, I drew out a plan for the regions so as to limit the load and stress on each new mirror. Thus, the tri-colored map above. I also learned in the process that we will have served roughly 223 gigabytes of data over HTTP, network-wide, by the end of January, 2011, which averages out to about 1.9GB per mirror, per day, with the 115 active mirrors we have worldwide as of right now.

Setting myself an arbitrary date of 30 April, 2011, the goal is to have all existing official mirrors flipped over to the rsync server designated for their region. Visitors to php.net should see no difference and should experience no negative impact during the transition, but the benefits will be great: far less likelihood of a mirror being automatically dropped from rotation due to stale content; the ability for maintainers to shorten their synchronization interval to hourly, providing the freshest content as it becomes available; lower latency and greater speeds for many of those who are far from the current central rsync server; and far, far less stress on our own network.

The immediate goal is ensuring that there are no snags, and that we can successfully synchronize all of the data to the website mirrors without omission. Beginning right away, I’ll be coordinating privately and directly with a few mirrors around the world to beta test the new layered design according to the rsync distribution plan. By 12 February of this year – a bit more than two weeks from now – I hope (and expect) to have all of the kinks straightened out. After that, we’ll begin migrating the rest of the network in its entirety to the new design.

All new mirrors from that point forward will be instructed to use their local rsync mirror as well, as defined by the map above.
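For an individual mirror maintainer, the switch should amount to nothing more than pointing the existing cron’d rsync job at the regional server. Roughly speaking, something like the following, where the host placeholder and module name are illustrative rather than the official instructions:

rsync -avz --delete rsync://<regional-rsync-host>/phpweb/ /path/to/mirror/docroot/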

It’s no large task, of course, but I’m hoping that the addition of just three new servers will help to ensure the health and stability of the network as a whole for years to come. While I don’t expect anyone to notice any difference – good or bad – in the user experience, behind the scenes I think we’ll not only see some differences in operations, but also begin to come up with even more ways to improve performance in the future.

(Finally) Announcing the Public Release of FileConv for PHP

Almost exactly two years ago, on New Year’s Day, 2009, I sent an email describing a new PHP extension I’d finished, and was interested in submitting to the PECL repository. The package, entitled FileConv, would natively handle file conversion for line endings, back and forth, between *NIX (LF: \n) and DOS (CRLF: \r\n). At that time, Ilia Alshanetsky recommended that I also add Mac conversion (CR: \r). Legacy MacOS, that is, prior to becoming a sort of ClosedBSD, if you will.

Somehow, as often happens in our busy lives, I forgot to follow through with submitting it to the PECL repo. Last night I was using one of the functions and found a datestamp bug where, in certain situations, it would give the converted file a date approximately 430,000 years in the future. That’s almost 253 times the estimated duration for the complete decomposition of a modern PC, which is figured to be a paltry 1,700 years. In any case, once I patched the file and recompiled, I was reminded of my discontinued effort to release the code to the public as an open source package. Well, time to change that, I suppose.

So today, you can download the FileConv extension right here:
FileConv-2.2.6.tar.bz2 (7,073 bytes)
FileConv-2.2.6.tar.gz (6,636 bytes)
FileConv-2.2.6.zip (10,531 bytes)

MD5 Hashes:
– d6200f0693ae63f9cc3bb04083345816 FileConv-2.2.6.tar.bz2
– c2b0db478628e0a4e2ce66fb06d19593 FileConv-2.2.6.tar.gz
– b3ff103424e4be36151a1c5f9cadd58d FileConv-2.2.6.zip

SHA1 Hashes:
– 0521fdeaa8bfb250c8c50bc133b355872fa70cad FileConv-2.2.6.tar.bz2
– 08e2c361fc41f925d0b4aa3a0bbdd7e0884b24d6 FileConv-2.2.6.tar.gz
– 9eb9355555dd8e6e6b6b7f3dc7464c7a6107b187 FileConv-2.2.6.zip

Keep in mind: this has only been tested on Linux (CentOS, Mandriva, and Ubuntu), as I have neither the ability nor the desire to play on Windows (but feel free to try it). In a future release, the code will also be compacted; right now, every conversion type has its own function within the source. This isn’t necessary: it could be a single master function with a simple switch for one small section of the code. I’ll get to that another day, when I have some time to hack it up again.

This distribution comes with a very simple automated installer for now. If/when it moves to PECL, that will be phased out, of course, as PECL will handle that itself. If you have root/sudo access on the box, you can just run ./install.sh from the package directory and follow the instructions from there. Manual installation instructions are included as well.

This package provides functions that somehow never made it into the PHP core: dos2unix(), unix2dos(), mac2unix(), mac2dos(), unix2mac(), and dos2mac(). It does not, however, do any checking or validation prior to conversion. If you decide to use this library, I’d highly recommend employing some basic checking in your code. Something like this should be used at a minimum:

<?php
// Return a human-readable description of a file's contents, preferring the
// fileinfo extension and falling back to the system's file(1) command.
function get_info($filename) {
  if (!function_exists('finfo_open')) {
    // escapeshellarg() keeps odd filenames from breaking (or abusing) the shell call
    return trim(shell_exec('file ' . escapeshellarg($filename)));
  } else {
    $finfo = finfo_open();
    $format = finfo_file($finfo, $filename);
    finfo_close($finfo);
    return $format;
  }
}

$filename = '/path/to/some/file';           // the file you intend to convert
$info = strtolower(get_info($filename));    // e.g. "ascii text, with crlf line terminators"

if (strpos($info, ' crlf line') !== false) {
    // File is DOS (\r\n)
} elseif (strpos($info, ' cr line') !== false) {
    // File is legacy Mac (\r)
} else {
    // File is *NIX (\n)
}
?>

NOTE: this does not ensure that it is a text file. You are strongly advised to address that as well. The included test.php file has a line that checks to see if the file is binary or text, so feel free to plagiarize that — or, better yet, build a better mousetrap.
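And, purely as a hedged sketch of how the check ties into an actual conversion (I’m assuming here that dos2unix() takes a filename and converts the file in place; check the bundled test.php and documentation for the real prototypes):

<?php
// Reusing the get_info() helper and $filename from the snippet above:
// only convert when the file looks like text with DOS (\r\n) line endings.
$info = strtolower(get_info($filename));
if (strpos($info, ' text') !== false && strpos($info, ' crlf line') !== false) {
    dos2unix($filename);   // assumed to convert the file in place
}
?>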

If you come across any bugs/issues/whatever, let me know.

Skype and Google Earth Cause X To Crash On Ubuntu 10.10

[UPDATED 19-JAN-2011 – Thanks to Drew (in the comments) for bringing up the fact that this is only for 64-bit versions of Ubuntu. The filenames would indicate that, but there’s no sense wasting your time if you’re looking for a 32-bit solution. Well, at least not yet. I may do a 32-bit build if there’s a need, but it seems as though the official repos may now have the patched versions. Have you gotten an official solution that resolved the issues? Feel free to let me know in the comments.]

After months of dealing with the mouse getting stuck between monitors, blinking like crazy, and freezing everything but remote SSH administration of my Ubuntu 10.04 (Lucid) desktop with its triple-head monitor setup, I gave up and upgraded to 10.10 (Maverick) in hopes that it would fix the issues. At first I couldn’t tell whether it had, because the upgrade introduced new errors. Worst of all: any time I launched Skype, the screens would go black and X would segfault and restart. The same was true of Google Earth and, as far as I could tell, of all Qt applications on the desktop. It took a good thirty-six hours before I traced everything back and came up with a solution. So now I’m running 10.10, which not only has a couple of minor improvements, but also seems to have finally fixed the mouse-locking issue. Hooray!

My issue turned out to be rooted in an issue with Xinerama on X with multiple monitors on an x86_64 box running the final stable of Ubuntu 10.10 (Maverick). If you have the same issues (Skype crashes X), try downloading the following file (routed through my company’s URL service so that it’s easier to share):

http://links.parasane.net/fvsq

The filename is xorg_crash_fix_debs_and_NVIDIA_driver_x86_64.tar.bz2, with the following hashes:

MD5: fe2fa5684a0f051d552bd7d0b4ee6f6a
SHA1: 0edea79d4832ce31954e29991405a67403732639

Applying it is simple (provided you know how to resolve your own dependencies, if any are missing). If you’d like to nip that in the bud before getting started, here’s a list of all of the packages I’m aware of that you should have installed, or that may be needed, to finish this process without errors (feel free to pick and choose on your own if you’re more comfortable doing a minimalist installation):

sudo apt-get install debhelper quilt bison flex xutils-dev x11proto-bigreqs-dev x11proto-composite-dev x11proto-damage-dev x11proto-xinerama-dev x11proto-randr-dev x11proto-record-dev x11proto-render-dev x11proto-resource-dev x11proto-scrnsaver-dev x11proto-video-dev x11proto-xcmisc-dev x11proto-xf86bigfont-dev x11proto-xf86dga-dev x11proto-xf86vidmode-dev x11proto-dri2-dev libxfont-dev libxkbfile-dev libpixman-1-dev libpciaccess-dev libgcrypt-dev nettle-dev libudev-dev libselinux1-dev x11proto-xf86dri-dev x11proto-gl-dev libxmuu-dev libxrender-dev libxi-dev x11proto-dmx-dev libdmx-dev libxpm-dev libxaw7-dev libxmu-dev libxtst-dev libxres-dev libxv-dev libxinerama-dev devscripts
sudo apt-get build-dep xserver-xorg-core

The steps to installing the fixed binaries are:

  • Drop to an alternative TTY prompt: Press CTRL+ALT+F1
  • Download the package file: wget http://links.parasane.net/fvsq -O xorg_crash_fix_debs_and_NVIDIA_driver_x86_64.tar.bz2
  • Uninstall your current NVIDIA drivers: sudo nvidia-uninstall
  • Decompress the file linked above: tar -xjvf xorg_crash_fix_debs_and_NVIDIA_driver_x86_64.tar.bz2
  • Change to the newly-created directory: cd xorg_crash_fix_debs_and_NVIDIA_driver_x86_64/
  • Install the core and common packages: sudo dpkg -i xserver-xorg-core_1.9.0-0ubuntu7_amd64.deb xserver-common_1.9.0-0ubuntu7_all.deb xvfb_1.9.0-0ubuntu7_amd64.deb
  • Set execution permissions on the included NVIDIA driver: chmod 0755 ./NVIDIA-Linux-x86_64-260.19.21.run
  • Execute the new NVIDIA driver: sudo ./NVIDIA-Linux-x86_64-260.19.21.run
  • Reboot the system: sudo shutdown -r now
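Once you’re back at a desktop, a quick way to confirm that the patched packages survived the driver installation is to check the installed versions, which should show the 1.9.0-0ubuntu7 builds from the tarball:

dpkg -l xserver-xorg-core xserver-common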

You should now have a fully-working X system again. And if you upgraded because of the mouse-hang issues, you should be in good shape there, too!

NOTE: It should be VERY obvious, but this comes with absolutely no warranty or guarantee whatsoever, and you’re completely responsible for any issues that arise, directly and/or indirectly, from usage of these packages or instructions, et cetera. You know the drill by now, I’m sure.

SSH Client On Ubuntu Desktop Timing Out

It would happen again and again and again…. I’d walk away from the computer (yeah, on rare occasions that happens), or I’d flip to another terminal and get sidetracked there:

Write failed: Broken pipe

Son of a bitch! And why the hell don’t I remember to vi in screen until moments like this?!?

Well, unless I keep ‘top’ open or run a while [ 1 ]; do echo -n '';sleep 30; done, it continues to drop out without fail. And an interesting (to me) fact that I’ve actually recorded: I spend more than 60% of my day on the command line.

Logically, the first things I tried were adding the KeepAlive and TCPKeepAlive parameters to /etc/ssh/ssh_config, but that had no positive effect. Then I started to dig deeper into the issue to see what other options I had. There were no network problems or abnormally high numbers of dropped or fragmented packets; it would happen regardless of whether I was on WiFi, 3G, or cabled LAN; and all other network services and applications were working just fine — including things like telephony, which was perfectly clear. I knew it had to be a timeout issue, and since it wasn’t restricted to just a single server (or even to just thirty or forty servers, for that matter), nor was it an issue until I [finally] switched from Mandriva to Ubuntu, it had to be a local problem.

I dug and dug and dug, almost all the way to Virtual China, and finally found my Holy Grail:

ServerAliveInterval

Right now, I’m using ServerAliveInterval 120 and, for the first time since the issue reared its ugly head, I’ve been able to keep SSH sessions open and idle overnight. Hoorayings for Internets funs again and stuffs! Now maybe I can stop losing time on this and go back to only dealing with the issue of my mouse getting stuck between screens with Xinerama.
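For reference, the setting lives in /etc/ssh/ssh_config (system-wide) or in ~/.ssh/config (per-user); a minimal sketch of the relevant block, with ServerAliveCountMax included only as an optional companion (3 is the default anyway):

Host *
    ServerAliveInterval 120
    ServerAliveCountMax 3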