
YAWDLCSB: Yet Another Way to Disable Linux Console Screen Blanking

I wrote about this topic several years ago, but I'll be darned if I can find it. I run several Linux virtual machines using VMware Fusion, and I'd rather the console text stay visible, both for identification and to see any console messages. By default, the consoles go blank after a few minutes. If you do a Google search for how to disable console screen blanking, you'll see that the most common recommendation is to add /usr/bin/setterm -blank 0 to the end of /etc/rc.d/rc.local. For some reason, that feels icky to me. Another way is to append the control characters to the end of the /etc/issue file, with /usr/bin/setterm -term linux -blank 0 >> /etc/issue.
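For the record, here are both approaches as commands; the second is the one described below:

# common recommendation: run setterm from the boot scripts
echo "/usr/bin/setterm -blank 0" >> /etc/rc.d/rc.local

# alternative: append the escape sequence itself to /etc/issue
/usr/bin/setterm -term linux -blank 0 >> /etc/issue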

For background, the function of setterm is to issue specific control character sequences to the terminal to cause the terminal to change its behavior. There is a database of terminal capabilities (/etc/termcap) that correlates with the terminfo facility that setterm uses. When Linux boots, it starts up the terminals identified in /etc/inittab. My installation of CentOS 5.5, and I believe many others as well, starts six /sbin/mingetty virtual consoles listening on /dev/tty[1-6]. When mingetty starts up, it echoes the contents of /etc/issue, unless given the -noissue argument.
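For reference, the virtual console entries look like this on a stock CentOS 5 system (typed from memory rather than copied from my machine):

grep mingetty /etc/inittab
# 1:2345:respawn:/sbin/mingetty tty1
# 2:2345:respawn:/sbin/mingetty tty2
# ...
# 6:2345:respawn:/sbin/mingetty tty6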

I should also mention that I made the modification to the /etc/issue file while connected to the (virtual) machine over an SSH session. Therefore, my TERM environment variable was not necessarily the same as the terminal identifier that the virtual console uses at boot. In fact, it isn't (xterm vs. linux). For this reason, it's necessary to include the -term linux argument to the setterm command so that its query of the terminal escape codes is correct. You can verify that there is indeed a difference by running /usr/bin/setterm -blank 0 | /usr/bin/hexdump when connected to both terminals; there is no escape sequence meaningful to xterm for screen blanking. After making the modification, I can see the appended bytes with /usr/bin/hexdump /etc/issue. Obviously, this will only take effect when each /sbin/mingetty is restarted. The easiest thing to do is reboot.
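Here's a sketch of the verification steps described above, forcing TERM rather than relying on the current session:

# under xterm there is no blanking escape to emit; under linux there is
TERM=xterm /usr/bin/setterm -blank 0 | /usr/bin/hexdump
TERM=linux /usr/bin/setterm -blank 0 | /usr/bin/hexdump

# confirm the bytes were appended to the issue file
/usr/bin/hexdump /etc/issue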

Installing FusionReactor 3 in ColdFusion 8 on CentOS 5

I document changes to web application cluster nodes to make the procedure repeatable, and for future reference. Recently I installed FusionReactor on CentOS 5 servers using the manual installation instructions. I couldn't use the installer program because these boxes don't run X11, and I prefer to do the installation from the command line anyway. I followed the instructions from the FusionReactor Installation Guide on the first attempt to install the software. After troubleshooting a few problems, I wrote up explicit instructions for the installation.

Locate the current release of FusionReactor 3 for Linux in an RPM package; see FusionReactor Manual Installation section on the downloads page. Grab the URL for the download, and perform the following:

# Configuration for defaults
CF8_HOME=/opt/coldfusion8
FR_NATIVE_LIBS=$CF8_HOME/lib
FR_JAVA_LIBS=$CF8_HOME/runtime/servers/coldfusion/SERVER-INF/lib
FR_HOME=/opt/fusionreactor
FR_PREFS=com/intergral
JAVA_SYSTEM_PREFS=/etc/.java/.systemPrefs

# Download the RPM and verify
wget http://www.fusion-reactor.com/fr/FusionReactor-Download-Link
md5sum FusionReactor.rpm
rpm -Uvh FusionReactor.rpm

# Stop Apache and ColdFusion
service httpd stop
service coldfusion_8 stop

# Modify FusionReactor permissions
chown nobody $FR_HOME
chown -R nobody $FR_HOME/html
chown -R nobody $FR_HOME/etc
chown -R nobody $FR_HOME/instance

# Copy Java and native files
if [ ! -d $FR_JAVA_LIBS ]; then
    mkdir $FR_JAVA_LIBS
fi
cp $FR_HOME/etc/lib/fusionreactor.jar $FR_JAVA_LIBS
cp $FR_HOME/etc/lib/libFusionReactor.so $FR_NATIVE_LIBS

# System Java prefs
if [ ! -d $JAVA_SYSTEM_PREFS/$FR_PREFS ]; then
    mkdir -pm 777 $JAVA_SYSTEM_PREFS/$FR_PREFS
fi
chmod 777 $JAVA_SYSTEM_PREFS

The next part of the installation requires copying a chunk of XML from the FusionReactor source into the ColdFusion server configuration:

# Copy the <filter/> and <filter-mapping/>
vim $FR_HOME/etc/conf/fusionreactor-web.xml $CF8_HOME/runtime/servers/coldfusion/SERVER-INF/default-web.xml

The firewall must be updated to allow HTTP access to the FusionReactor interface. After making that change, Apache and ColdFusion can be brought back up.

# Update firewall
# -A RH-Firewall-1-INPUT -m tcp -p tcp --dport 8088 -j ACCEPT
vim /etc/sysconfig/iptables
service iptables restart

# Start Apache and ColdFusion
service coldfusion_8 start
service httpd start

Immediately log in to FusionReactor and change the password for the Administrator account; disable the Manager and Observer accounts. Voila! Please let me know if this is useful to you.

Nagios Remote Monitoring

Nagios is awesome. I recently configured a VPS to monitor all the servers in a hosting facility. Although I've been using Nagios for several years, I hadn't used the NRPE plugin until now. It's quite easy to add support on the Nagios server and the monitored host (both CentOS 5) using packages from the RPMforge repository, which includes all of Dag Wieers' builds. I'll document the process here.

Although not required, CentOS recommends using the Priorities Yum plugin to prevent conflicts when using third-party repositories. I haven't experienced conflicts requiring repository prioritization, but it does seem like a good facility.
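If you do want it, a minimal sketch of the setup (the priority values are just examples):

# install the plugin (enabled via /etc/yum/pluginconf.d/priorities.conf)
yum install yum-priorities

# then add a priority to each repository section, e.g. in /etc/yum.repos.d/CentOS-Base.repo:
# priority=1    (lower number wins; give third-party repos a higher number)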

Install the RPMforge repository package. Download the appropriate version for the system architecture (i386 or x86_64) directly, or browse all releases (DAG RPMs). With the RPMforge repository available, install the Nagios NRPE plugin on the monitoring server:
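Installing the repository package boils down to something like this, with the file name left as a placeholder since the current release changes over time:

# download the rpmforge-release package for your architecture, then:
rpm -Uvh rpmforge-release-<version>.el5.rf.<arch>.rpm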

yum install nagios-plugins-nrpe

On the host to be monitored, install the NRPE daemon and plugins:

yum install nagios-plugins nagios-nrpe

The package includes a configuration file (/etc/nagios/nrpe.cfg) with verbose comments explaining its settings. The following is a simple file to get started measuring some basic system health metrics.

# $Id$
pid_file=/var/run/nrpe.pid
server_port=5666
nrpe_user=nagios
nrpe_group=nagios
command_timeout=60
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk_root]=/usr/lib/nagios/plugins/check_disk -w 20 -c 10 -p /dev/mapper/vg.01-root
command[check_disk_var]=/usr/lib/nagios/plugins/check_disk -w 20 -c 10 -p /dev/mapper/vg.01-var
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200

Enable and start the NRPE daemon on the host to be monitored:

chkconfig nrpe on
service nrpe start

Finally, modify the firewall to allow requests from the Nagios server's IP, say 10.1.1.120:

-A INPUT -m state --state NEW -m tcp -p tcp -s 10.1.1.120 --dport 5666 -j ACCEPT

On the Nagios server, create a configuration to monitor the remote host. Here's an example snippet:

# $Id$
# vim: ts=4
define host{
    use                 remote-host
    host_name           server
    address             10.1.1.60
}
define service{
    use                 nrpe-service
    host_name           server
    service_description Current Load
    check_command       check_nrpe!check_load
}
...
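It's also worth confirming from the Nagios server that it can reach the daemon directly. A quick check, assuming the plugin path used above (it may be /usr/lib64/nagios/plugins on x86_64):

# prints the NRPE version if the daemon and firewall rule are working
/usr/lib/nagios/plugins/check_nrpe -H 10.1.1.60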

That's about all it takes to start processing remote Nagios plugins through NRPE. Enjoy.

Update: The contrib/check_mem.pl plugin didn't work correctly. I suspect the format of the data returned by vmstat has changed since it was written in 2002. Not surprising. I could have hacked on it myself, but I found a replacement Perl script at Unixdaemon that does the job. It pulls data from free (from the same procps package).

Time Zones and Linux

Today I learned a bit about how Linux, specifically Red Hat Enterprise Linux, knows about the local time zone of the server. The GNU C library (glibc and glibc-common) includes localization information. It also provides a tool, /usr/bin/tzselect, for choosing a time zone to set in an environment variable. However, it doesn't include the actual time zone data appropriate for use by programs at runtime. The time zone data comes from tzdata (who would have guessed?). It installs files for every time zone into /usr/share/zoneinfo, arranged by location. The system expects /etc/localtime to be a copy or symlink of the appropriate time zone file. For me, it's /usr/share/zoneinfo/America/Los_Angeles. With that in place, my system now uses PST instead of UTC. Red Hat also provides fancy tools for choosing the time zone. The server I'm working on currently, however, is a minimal installation and doesn't have their system-config-date package installed.
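Setting it by hand amounts to something like this (a sketch; the /etc/sysconfig/clock note is my addition, not something covered above):

# point /etc/localtime at the desired zone (a copy also works)
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

# on Red Hat / CentOS, /etc/sysconfig/clock should agree:
# ZONE="America/Los_Angeles"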

As I create leaner and leaner virtual CentOS installations, it's handy to know how to properly configure the system by hand, rather than relying on programs that expect a user interface.

Updating Subversion Repositories to Berkeley DB 4.3

While copying various old repositories to a new server, I found that some of them needed to be updated to the current version of Berkeley DB. If the new svnadmin is used on an old repository, it outputs an error like so:

svnadmin: Berkeley DB error for filesystem 'project/db' while opening environment:
svnadmin: DB_VERSION_MISMATCH: Database environment version mismatch
svnadmin: bdb: Program version 4.3 doesn't match environment version

The new server is CentOS 5.2, which comes with Subversion 1.4.2, using Berkeley DB 4.3. The system had compat-db-4.2.52 installed, providing /usr/bin/db42_*. To perform the upgrade I needed to install the db4-utils package. After searching the interwebs for solutions, I cobbled the following script together:

#!/bin/bash
# Upgrade a Subversion BDB repository environment from Berkeley DB 4.2 to 4.3

REPOS=${1:-_}
if [ "$REPOS" == _ ]; then
    echo "Usage: `basename $0` repository"
    exit 1
fi

# Make the path absolute if a relative path was given
if ! echo "$REPOS" | grep -qE '^/'; then
    REPOS="$PWD/$REPOS"
fi

if [ ! -d "$REPOS" ]; then
    echo "Error: $REPOS does not exist."
    exit 1
fi
if [ ! -d "$REPOS/db" ]; then
    echo "Error: $REPOS does not look like a Subversion repository."
    exit 1
fi

# Refuse to run while anything has files open in the repository
if [ -n "$(lsof -t +D "$REPOS")" ]; then
    echo "Error: there seem to be open files in the repository."
    exit 1
fi

read -n 1 -p "About to update repository to Berkeley DB 4.3. Continue? (y/n): "
echo
if [ "$REPLY" != "y" -a "$REPLY" != "Y" ]; then
    exit 2
fi

cd "$REPOS/db"

# Checkpoint, recover, and archive with the old (4.2) utilities
/usr/bin/db42_checkpoint -1
/usr/bin/db42_recover
/usr/bin/db42_archive

# Let the 4.3-linked Subversion tools recreate the environment,
# clean up the old log files, and verify the repository
/usr/bin/svnlook youngest ..
/usr/bin/db_archive -d
/usr/bin/svnadmin verify ..

echo "Finished. File permissions have changed."

Obviously, you'd want to make a complete backup of your current repository before trying the upgrade. It's worked on the repositories that I've used it on so far. Even for a small repository, it can take a while to run. Remember that if the repository is being accessed through mod_dav_svn, the permissions will need to be set back to the Apache user and group after the upgrade.
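For example, with a hypothetical repository at /var/svn/project served by mod_dav_svn, the before and after steps would look something like this:

# back up the repository before attempting the upgrade (hypothetical paths)
tar -czf /root/project-pre-bdb43.tar.gz /var/svn/project

# run the script above against the repository
# ./upgrade-bdb.sh /var/svn/project

# afterwards, restore ownership so Apache can access it again
chown -R apache:apache /var/svn/project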

Configuring a Production Open BlueDragon Server

I've just finished building up a couple production servers to host web applications. The servers are Xen guests on an AMD Quad-Core Opteron x86_64 host. The VPS template is a minimal installation of CentOS, to which I added packages as needed. The release of Sun Java 1.6u12 came out just as I was writing this, so these instructions will need to get updated slightly when JPackage has a new RPM (more on that later). Both Matt Woodward and Dave Shuck recently wrote about configuring CFML engines with Tomcat. The installation I'll describe is somewhat similar.

  • CentOS 5.2
  • Tomcat 5.5.23 (tomcat5-5.5.23-0jpp.7.el5_2.1)
  • Apache 2.2 (httpd-2.2.3-11.el5_2.centos.4)
  • Sun Java 1.6u11 (java-1.6.0-sun-1.6.0.11-1jpp)
  • Sun JavaMail 1.4.1
  • Open BlueDragon 1.0.1

The installation of packages using yum is a snap; however, there was an issue with the architecture detection. There is a simple workaround: hard-code i386 as the basearch:

sed -i -r 's/\$basearch/i386/g' /etc/yum.repos.d/CentOS-Base.repo

The procedure is to install jpackage-utils, then download and repackage the Sun Java SE Development Kit 6 (JDK 1.6) using the JPackage Project non-free nosrc RPM. I install some, but not all, of the resulting RPMs:

yum --nogpgcheck localinstall java-1.6.0-sun-1.6.0.11-1jpp.i586.rpm java-1.6.0-sun-devel-* java-1.6.0-sun-fonts-*
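For completeness, the repackaging step before that localinstall looks roughly like this (a sketch; the nosrc RPM file name is inferred from the version above, and it assumes the Sun JDK binary installer has already been downloaded into the rpmbuild SOURCES directory):

# rebuild the JPackage nosrc RPM against the downloaded Sun JDK installer
rpmbuild --rebuild java-1.6.0-sun-1.6.0.11-1jpp.nosrc.rpm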

The CentOS Wiki has a thorough article on installing Java on CentOS. I've considered using OpenJDK, but I don't know what sort of compatibility issues that would raise.

The Tomcat server starts up just fine with GNU's version of the Java runtime (libgcj and java-1.4.2-gcj-compat). However, with the GNU version of JavaMail (classpathx-mail) in place instead of Sun JavaMail, the following chunk of CFML will fail with a javax.mail.NoSuchProviderException from within the Open BlueDragon web application:

<cfscript>
    server = "localhost";
    port = 25;
    username = "";
    password = "";
    mailSession = createObject("java", "javax.mail.Session").getDefaultInstance(createObject("java", "java.util.Properties").init());
    transport = mailSession.getTransport("smtp");
    transport.connect(server, JavaCast("int", port), username, password);
    transport.close();
</cfscript>

Open BlueDragon does include the correct Jar, but the JVM that Tomcat configures loads the system version first. Rather than muck about with the classpaths, I downloaded the current version of JavaMail, extracted mail.jar, and created an alternatives link:

unzip -j -d /tmp javamail-1_4_1.zip javamail-1.4.1/mail.jar
mv /tmp/mail.jar /usr/share/java/javamail-1.4.1.jar
alternatives --install /usr/share/java/javamail.jar javamail /usr/share/java/javamail-1.4.1.jar 5000
alternatives --auto javamail
file /var/lib/tomcat5/common/lib/\[javamail\].jar

Tomcat installs a set of symlinks to /usr/share/tomcat5. Configuration files are placed in /etc/tomcat5. For this installation, I use a stripped-down version of server.xml that provides web application hosting on a per-user basis.

<Server port="8005" shutdown="SHUTDOWN"> <GlobalNamingResources /> <Service name="Catalina"> <Connector port="8080" address="127.0.0.1" protocol="HTTP/1.1" /> <Connector port="8009" address="127.0.0.1" protocol="AJP/1.3" /> <Engine name="Catalina" defaultHost="localhost"> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" debug="0" /> <Host name="localhost-username" appBase="/home/username/webapps" unpackWARs="false" autoDeploy="false" debug="1"> <Context path="" docBase="openbd" allowLinking="true" caseSensitive="true" swallowOutput="true" /> </Host> </Engine> </Service> </Server>

The standard Tomcat configuration has a single Host within an Engine named Catalina. I've added a second Host that is specific to a system user username, which allows each user on the system to manage their own deployed web applications and choose their own root Context. Installing Open BlueDragon as the default web application simplifies the Apache HTTP configuration.

The username user has an Apache HTTP configuration file in /etc/httpd/conf.d/username.conf with mod_rewrite rules to proxy all requests for CFML files to the Tomcat HTTP Connector. I had intended to use the AJP Connector with mod_proxy_ajp, but there is a problem with the proxy request not specifying the proper hostname. There might be a solution to that issue, but I haven't found it yet. The plain mod_proxy_http module works properly in the following configuration:

<VirtualHost *:80>
    DocumentRoot /home/username/websites/sitename
    ...
    RewriteCond %{SCRIPT_FILENAME} \.cfm$
    RewriteRule ^/(.*)$ http://localhost-username:8080/$1 [P]
</VirtualHost>

The rest of the Apache HTTP configuration handles web requests for flat files, served from ~/websites/sitename. The CFML files can be placed in ~/webapps/openbd, but an easier deployment is to place everything in ~/websites/sitename (like you would with a typical ColdFusion server). Symbolic links can be added for directories containing CFML. Consider the following:

cd ~/webapps/openbd
ln -s ../../websites/sitename/MachII MachII

It would probably be a good idea to set the Open BlueDragon root mapping appropriately. There are a few issues with file ownership and permissions that I didn't address above. I've added username to the /etc/sudoers file, granting that user limited access.
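As an example of the kind of limited access I mean, the sudoers entry might look something like this (hypothetical; the actual commands granted depend on what the user needs to do, such as restarting Tomcat):

# /etc/sudoers (edit with visudo) -- hypothetical entry
username ALL = NOPASSWD: /sbin/service tomcat5 restart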

CentOS 5.1 on VMware Server

I created a pretty awesome CentOS 5.1 virtual machine. It's quite lean, using only 256 MB of RAM and a 2.0 GB virtual disk. One issue that I encountered when doing the installation from a DVD (a virtual CD-ROM using an ISO image) was that the installer boot kernel didn't have support for the virtual SCSI controller created by VMware Server 1.0.5. Apparently, my choice of OS (RHEL 4) in the Create New Virtual Machine Wizard caused the VM to use a BusLogic controller. The fix was to edit the VMX file and add the following:

scsi0.virtualDev = "lsilogic"

For whatever reason, scsi0.virtualDev was undefined, so I added the line rather than editing an existing definition. The CentOS installation worked perfectly using the LSI Logic controller, and the guest OS continues to function properly.

I see that there is an Open Source project to replace the proprietary VMwareTools package: Open Virtual Machine Tools. I would really like to use these, but I don't want to go through the effort of compiling them myself every time there is a kernel update. Hopefully they'll be added to a current repository soon. I installed the VMwareTools package, but since I'm not running X Windows on this VM and don't want shared folder support, I removed it.

While installing CentOS 5.1, I created a new kickstart script. It's a no-frills install: centos51.cfg. By the way, it will take more than 2.0 GB of disk space to install using that kickstart script. On my first pass, the /var and /home logical volumes were so big that they didn't leave enough space on the root filesystem. When this happens, Anaconda presents an amusing error message:

Very funny, guys.