Debian Squeeze 6.0 Installation Over SSH

One of the great benefits of the Debian Installer is the ability to boot an ISO image, set up networking and complete an installation remotely via SSH (Secure Shell). You can use the following steps to get the installer launched.

Boot from the CD and in the Installer boot menu select “Advanced options >”

Select “Expert install”

The installer will load up and you will be presented with the Debian installer main menu.

If necessary, set the default language and keyboard (you can reconfigure them later over SSH if needed), and then select “Detect and mount CD-ROM”.

It then prompts you to load modules from USB storage; if you have drivers to load from USB, you’ll want to accept. It then asks about PCMCIA resource range options; since our hardware didn’t require this, we left it blank. Finally, if all goes well, you’ll see a confirmation screen saying that the CD-ROM detection was successful and that the disc contained the expected installation media.

The next option on the menu is “Load installer components from CD”, which you want to select. Browse the list, but for basic needs the only component you need to load is “network-console: Continue installation remotely using SSH”.

Now you’ll need to get networking going. Select “Detect network hardware” and then “Configure the network”. In this step, in addition to basic networking, it will ask you to set a hostname and domain name.

Next you want to select “Continue installation remotely using SSH”, which will generate SSH host keys and have you set a remote installation password. Once it has these set up, you will be presented with a screen giving you an installer@ipaddress location for the install and an SSH fingerprint. You or your remote technician will use these to SSH into the installer.

Finally, log in from your remote PC and complete the installation.
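
Connecting is a standard SSH login. A hypothetical example, assuming the installer reported the address 192.0.2.10 (substitute the address and fingerprint it actually displays):

# Log in to the network-console; verify the host key fingerprint against
# the one shown by the installer, then enter the remote install password
ssh installer@192.0.2.10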

Note: It’s important to keep a stable connection established during the installation, as the installer can behave poorly if you lose your connection and have to reconnect. Also, try to avoid resizing the terminal window while doing the install, as redraws of the window at the new size can sometimes cause problems.

Posted by Elizabeth Krumbach in News, 6 comments

Slides for my talk on “Automating X11 Keystrokes”

X11 is the graphical user interface most widely used on Linux operating systems. My slides and video demo for a short talk given at the Philadelphia area Linux Users Group (PLUG) on March 2nd are on-line. The slides briefly cover xrandr (which can also be used to set the screen resolution), xset, xwd / xwud, xdotool, and xautomation including xte. You can get the slides and watch the video at my page on Automating X11 Keystrokes.
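
As a taste of what the tools can do, here is a minimal hypothetical xdotool example (the keystrokes are made up for illustration):

# Type a string into the currently focused window, then press Enter
xdotool type 'hello world'
xdotool key Return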

Posted by CJ Fearnley in FOSS Community, News, 1 comment

Finding Help in Ubuntu (SCALE, Feb 25-27 in LA); Ubuntu Diversity; Debian Squeeze AKA 6.0

Here are some news items:

• Elizabeth Krumbach will give a talk on “Finding Help in Ubuntu” at UbuCon in Los Angeles

Tomorrow, February 25th at 9AM, LinuxForce Remote Responder Administrator Elizabeth Krumbach will give the opening talk at UbuCon at the Hilton Los Angeles Airport Hotel. Her talk is entitled Finding Help in Ubuntu. UbuCon is part of SCALE: Southern California Linux Expo which runs 2/25-2/27 2011.

• Elizabeth Krumbach interviewed in Linux Pro Magazine

Linux Pro Magazine interviewed LinuxForce Remote Responder Administrator Elizabeth Krumbach in an article on Ubuntu Increasing Its Diversity. Elizabeth is helping with the effort to increase diversity at the Ubuntu Developer Summit in Budapest in May.

• A new stable version of Debian known as Squeeze or 6.0 was released

The Debian project announced the release of Squeeze on 6 February 2011. eWeek reviewed the new release. We have already upgraded three systems and installed several new systems running Debian Squeeze. It is very reliable, as we’ve come to expect from Debian.

In addition, the Debian web sites received a new design, a new “look”, that was released together with Squeeze. Check it out at http://www.debian.org.

Posted by CJ Fearnley in Conference, FOSS Community, News, 0 comments

One way to migrate Xen virtual machines to KVM in Debian

There are dozens of virtualization products on the market. When we launched our first high-availability cluster in early 2008 we chose Xen for its ability to run on hardware without virtualization extensions, its support in Debian 4.0 (Etch), and its general flexibility. We’ve learned a lot about the upstream of Xen, including the challenges that Debian maintainers face, and we were increasingly drawn to another free and open source virtualization technology, the Kernel-based Virtual Machine (KVM). The primary downside to KVM is that it requires hardware virtualization support in the CPU, but this support is now almost ubiquitous on modern servers. KVM has the advantage of being supported upstream in the Linux kernel itself, removing the onus of difficult kernel patching from the Debian Developers, and it has become the supported virtualization option for Ubuntu, Fedora and Red Hat. Additionally, KVM allows guests to run unaltered, meaning you don’t need a special kernel and can run many OSes, from Linux to FreeBSD to Windows 7.

We still work on a number of machines which lack hardware virtualization support, but as our customers upgrade hardware we’ve begun moving production virtual machines from Xen to KVM. In tackling the migrations of these production virtual machines we encountered several challenges, the major ones being:

  • In Xen, partitions were created as separate logical volumes on the host and mounted by Xen itself, so we didn’t need the Logical Volume Manager (LVM) within our Xen guests; in KVM, the logical volume on the host for a virtual machine is a single disk image, not separate partitions.
  • In Xen, no kernel package is installed in the guest; in KVM, one is required.
  • In Xen, the guest has no independent bootloader; in KVM, it needs one to boot.

The first step was to create a partition table on the new KVM image identical to the one on Xen. We wanted to use LVM within the guest, which required a Matryoshka (Russian nesting doll) approach: first we’d create a volume group on the host to give us the typical flexibility of LVM host-side, then another one on the disk image, giving us the flexibility required within the guest itself to expand any partition. Finally we’d need to bootstrap the new system and copy the files over.

One way to go about all of this is a manual process. That would allow for scripting the procedure, but it requires a significant investment of time to get everything right so that downtime is kept to a minimum. Since we only had a half dozen of these VMs to migrate in total, we looked for an existing, familiar way of handling all these steps, which is when we turned to the Debian Installer. Assuming a local mirror of core Debian packages (recommended), we could install a skeleton system on a test IP address, properly partitioned, with LVM configured and bootstrapped, in less than 20 minutes. We could then take this skeleton system, which we know to be a functioning, bootable system, and move the files over from the Xen installs, arguably decreasing the risk and the downtime required for each migration.

To get started, you’ll first need to calculate the total size of the current VM and lvcreate a slice of that size on the KVM system, then launch the Debian installer against the image using virt-install, something like:

virt-install --connect qemu:///system -n guest.example.com -r 768 -f /dev/mapper/VolumeGroup-guest.example.com -s 12 -c /var/lib/libvirt/images/debian-506-i386-netinst.iso --vnc --noautoconsole --accelerate --os-type linux --os-variant debianLenny

In this example we assume the debian-506-i386-netinst.iso is in /var/lib/libvirt/images/ and that we want 768M of RAM; the information you put here is similar to what you would previously have defined in your /etc/xen/guest.example.com.cfg file for Xen.
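
For the lvcreate step itself, a minimal sketch (the volume group name VolumeGroup and the 12G size are taken from the virt-install example above; adjust both to match the VM you measured):

# On the KVM host: create the logical volume that will hold the guest's disk image
lvcreate -L 12G -n guest.example.com VolumeGroup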

Then use virt-manager to connect to the running install session (we actually connected from a remote desktop running virt-manager; the standard Debian installer does not provide serial access) and install Debian. You will need the root password to launch the installer. Proceed with the install.
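
A hypothetical example of connecting from a remote desktop, assuming the KVM host is reachable over SSH as root at kvmhost.example.com (the hostname and the use of the root account are illustrative):

# Open virt-manager against the remote libvirt instance, tunneled over SSH
virt-manager --connect qemu+ssh://root@kvmhost.example.com/system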

When you get to the step to partition the disks, lay out the partition table to be identical to that of the VM you want to migrate, except put it on LVM, with /boot on a separate partition outside of LVM. Complete the install, including installing GRUB.

Confirm that the system boots and works on a test IP address, and copy its /etc/fstab to the host system; you’ll need this later.

You now have a skeleton install of Debian which runs on KVM with the LVM partitions you need.

To begin the actual data migration, you’ll want to mount the volumes within your new KVM disk. This can be done with the help of a great little mapping tool called kpartx. To map and activate the volume group on the guest, follow these steps:

# Map the partitions inside the guest's disk image to /dev/mapper entries
kpartx -av /dev/VolumeGroup/guest.example.com
# Rescan for volume groups, then activate the guest's volume group
vgscan
vgchange -ay guest

In this example we assume the logical volume on the host is called “guest.example.com” and the volume group within the guest is called “guest”.

Now that the host can see the volume group on the guest, and all the logical volumes in /dev/mapper/, you’ll want to mount them.
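
A hypothetical sketch, assuming the guest’s volume group “guest” contains logical volumes named root and var (your volume names will differ):

# Mount the guest's root filesystem on the host, then the other volumes beneath it
mkdir -p /mnt/guest
mount /dev/mapper/guest-root /mnt/guest
mount /dev/mapper/guest-var /mnt/guest/var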

Once mounted, you’ll be able to start your rsync. To incur the least amount of downtime, rsync the large data partitions (like /srv, /home, /var, perhaps /usr) while your production host is still running. Note: all rsyncs completed during this process must be done with the “--numeric-ids” option so that user and group IDs are preserved numerically rather than remapped via the host’s name lookups!
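
A hypothetical example of one such transfer, pulling /home from the still-running Xen guest onto the mounted KVM volume (hostname and paths are illustrative):

# Preserve numeric UIDs/GIDs, hard links, ACLs and extended attributes
rsync -aHAX --numeric-ids guest.example.com:/home/ /mnt/guest/home/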

While you’re rsyncing the data, you’ll want to go into your Xen system and install the following packages, as shown in the sketch after this list (installing these will not impact your Xen system; it doesn’t use them, so it will simply ignore them):

  • linux-image-2.6-686
  • grub
  • lvm2
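
From within the Xen guest, that amounts to something like the following (a sketch, assuming a 32-bit Lenny-era guest as in the virt-install example above):

# These packages take effect only once the guest boots under KVM
apt-get update
apt-get install linux-image-2.6-686 grub lvm2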

When you’ve completed moving the largest portions of your Xen guest, bring down the production Xen guest (downtime starts now!), mount its filesystems, and begin rsyncing the remaining data over (preferably over a crossover cable for the fastest transfer; remember to use --numeric-ids in your rsync).

Once the rsync is complete, edit the following files on the KVM host:

  • /mnt/guest/etc/fstab – use the version you saved to the host in a previous step
  • /mnt/guest/etc/inittab – uncomment the ttyS0 line to allow serial access from the KVM host via virsh
  • /mnt/guest/etc/udev/rules.d/70-persistent-net.rules – comment out the eth0 line so eth0 can come up on the new KVM MAC address
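
For the inittab edit, the relevant line in a Lenny/Squeeze-era /etc/inittab looks like this once uncommented (a sketch; the baud rate and terminal type may differ on your system):

# Spawn a getty on the serial console so "virsh console" can log in
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100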

Unmount the filesystems on both sides, then on the KVM side disable the volume group and use kpartx again to unmap the partitions from the host:

# Deactivate the guest's volume group on the host
sudo vgchange -an guest
# Remove the partition mappings created earlier with kpartx -av
sudo kpartx -dv /dev/VolumeGroup/guest.example.com

You’re now ready to boot the VM on the KVM side. This can be done with virt-manager or virsh.
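
With virsh, for example:

# Boot the migrated guest, then attach to its serial console
# (enabled by the inittab edit above)
virsh start guest.example.com
virsh console guest.example.com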

Since you just moved the machine to a new server, and probably to new MAC addresses, you will probably need to run the arping command to reclaim the IP address of the VM and all its service IP addresses.
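
A hypothetical example, run from within the migrated guest (this assumes the iputils version of arping, whose flags differ from other implementations; substitute your real interface and IP address):

# Send gratuitous ARP replies so neighboring hosts update their ARP caches
arping -U -I eth0 192.0.2.10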

Some things to confirm are working:

  • Networking (confirm there are no lingering arp caching issues)
  • Email (where applicable, confirm system messages etc are being sent)
  • All services running (confirm key services, review monitoring dashboard, log in via ssh)
  • Confirm that you have contiguous logs

Now that we’ve completed one of these migrations we have a lot of ideas about how to improve the process, including the possibility of making the whole process more scriptable, but this quick method leveraging the Debian installer for easier disk configuration and bootstrapping worked very well.

Posted by Elizabeth Krumbach in Debian, Systems Management, Tech Notes, Virtualization, 0 comments

One way to bring Innovative FOSS and Linux solutions to your organization

Here is an effective way to try out Linux or FOSS (Free and Open Source Software) to help grow your organization. Hat tip to CIO magazine for Adam Hartung’s short little InnovationZone article “Outsource for Growth”.

Mr. Hartung recommends outsourcing to grow your organization … to do new things … to innovate … to be more flexible. Since your IT staff is probably too busy to take on growth projects like these, it makes a lot of sense to use consulting experts, like LinuxForce, to develop, test, and provision innovative FOSS infrastructure for you. How could your organization benefit from the outsourcing for innovation approach to build out a FOSS or Linux-based solution to grow your business?

Posted by CJ Fearnley in Development, 0 comments

Licensing Considerations When Integrating FOSS and Proprietary Software

Recently, I was looking for resources to help explain the implications of integrating FOSS (Free and Open Source Software) with proprietary software. This question is important for any organization that might want to build an “embedded” or “dedicated” system or product incorporating either its own proprietary software or third-party applications. I discovered that most of the information available about FOSS licensing addresses this issue rather obliquely. This post will cover the basics so that such organizations can see how straightforward it would be to include FOSS in their projects. Standard disclaimers about my not being a lawyer apply.

Use of any software system requires understanding the software licensing involved. There are many dozens of FOSS licenses that could apply in your situation, so understanding the details is necessary to assess compatibility. In broad terms, these can be effectively summarized by the Debian Free Software Guidelines and The Open Source Definition. Any software that complies with those specifications will freely (in both the “freedom” and money senses of the word) permit running proprietary and FOSS applications on the same system. Assessing that compliance, however, is easier said than done. Since Debian has analyzed most common FOSS and quasi-FOSS licenses, their archive and their license page can be used to assess how a license might work in your situation (for example, anything in “main” would be OK for co-distribution and running in mixed environments). So we always start with Debian’s meticulous and well-documented analysis when assessing any license.

From a high-level perspective, the requirements that FOSS licensing will impose on an organization wanting to include it with proprietary software will primarily be in the form of providing appropriate attribution (acknowledgement) and providing the source code for any FOSS (including any modifications) shipped as part of an integrated system. Typically the attribution requirement can be satisfied by simply referencing the source code of the FOSS components. The source code requirement can be met by providing the source code for all of the FOSS distributed as part of the integrated system, for example, by placing it on a documented FTP site.

There are some subtle issues that may arise during the software development process if proprietary and FOSS code are “linked” together. In such cases it is necessary to ensure that code over which one wants to assert proprietary control is only combined with FOSS code that explicitly supports such commingling. So it is necessary for a company wanting to keep its code proprietary to keep it “separate” from the FOSS code used in the integrated system. The use of the words “linked” and “separate” is intentionally vague because the terms of each FOSS license will need to be examined by legal counsel to understand the precise requirements. In these situations, there are license compliance management systems that can be adopted to help ensure that this separation is maintained.

Those are the basic issues. The references below describe these and related issues involved with Linux & FOSS licensing in much more depth:

Posted by CJ Fearnley in Development, Systems Management, 0 comments

Some Initiatives Resulting From DebConf10

I attended DebConf10 in early August at Columbia University in New York City. I thought I’d document some of the initiatives that resulted from that event.

I attended the Debian Policy BoF. It inspired me to start reviewing Debian Policy and led to my submission of a bug report to improve the description of the archive areas in Debian. In the BoF (Birds of a Feather) session, Russ Allbery requested help from everyone, including non-DDs (Debian Developers), to make policy clearer and more descriptive, so I obliged. I see that the negotiated text has been accepted and included in the git repository for the next release.

During the Debian Science sessions that I attended (see especially the video from the Debian Science Round Table), the idea of engaging upstream providers (software projects that are packaged by integrators like Debian) more effectively was discussed. This led me to draft a proposal that I posted to the Debian Project mailing list: Improving coordination / support for upstreams. There was precious little feedback, but I think it is a profoundly important issue: how do we improve coordination of upstreams with projects like Debian that integrate their software. So far there is very little infrastructure or knowledge about this important issue in the management of FOSS (Free and Open Source Software). How do you think we should start to address this problem?

I helped with the herculean task of getting Sage, a FOSS mathematics system, back into Debian. After discussing the issue during the Debian Science track at DebConf10, I met Lucas Nussbaum in the hacklab and he (with help from Luca Falavigna) managed to get the old buggy version of Sage removed from unstable (apparently, this version was causing support issues for the upstream Sage community, so this was a positive step forward). I also submitted five bugs (#592349, #592354, #592425, #592426, and #592429) about new upstream versions that are needed. I posted two detailed reports of work needed on packaging Sage for Debian to appropriate mailing lists. Getting Sage into Debian is the kind of big FOSS management challenge that I’m excited about. But I will need a lot of help to make progress. Let me know if you are interested in contributing to the effort!

Finally, for the record, here are links to the sessions where I participated in the discussion (video is available): Bits from the DPL, SPI BOF, Enterprise Infrastructure BOF, Mathematical Software in Debian, Overall presentation of Debian Science, and Debian Science Round Table.

Posted by CJ Fearnley in Conference, Debian, 0 comments

Attending Debian Day and DebConf10 Next Week

Since I’ve been involved with Debian GNU/Linux for over 15 years, it is exciting that I will be able to attend the first two and a half days of DebConf10, including Debian Day, from Sunday to Tuesday, August 1–3.

I am particularly looking forward to the following sessions: Pedagogical Freedom: Debian, Free Software, and Education; Beyond Sharing: Open Source Design (What are the challenges for the collaborative design process?); FLOSS Manuals: A Vibrant Community for Documentation Development; Bits from the DPL; The Java Packaging Nightmare; Collaboration between Ubuntu and Debian; How We Can Be the Silver Lining of the Cloud; Enterprise Infrastructure BOF (How enterprise technologies such as Kerberos, LDAP, Samba, etc. can work better together in Debian); Using Debian for Enterprise Infrastructure (Stanford University: A Case Study); and more (see the schedule for each day).

I’m also hoping I can attend on Thursday, when the math- and science-focused sessions will be held, but I’ll have to see how next week’s schedule works out in the office. If you are coming to DebConf, I’ll see you there!

Posted by CJ Fearnley in Conference, Debian, 0 comments

Beyond the Cloud: The Comprehensive Flexibility of FOSS May Bring Clearer Skies

Last week’s InformationWeek has a good article on cloud computing, Cloud ROI: A Grounded View.  It seems that even with all the hype (or because of it?) most are not “running blindly” to adopt “the cloud”.  I must admit the cloud metaphor has a powerful poetic charm to it.  That is probably why it has grabbed the attention of so many over the past few years. Everything in our world is ephemeral, so there is an aptness to the concept of a “cloud”. Moreover, I too like and use cloud analogies. But I am now looking for clearer skies!  Here is a short list of my gripes about "the cloud":

  • What does “cloud computing” mean? It isn’t at all clear! Here is some data: CIO magazine cites a Forrester report that says "the number one challenge in cloud computing today is determining what it really is". CIO also reported on a McKinsey study that "found 22 separate definitions of cloud computing"! And that leads to my second point:
  • The word “cloud” is so … vacuous and amorphous … “A cloud: it looks like Zeus!” only to transform in shape before your very eyes (“Wait, now it looks like Aphrodite!”) … and then it’s gone! Is this the kind of model people should entrust with their business data? It has no structural stability: inherently, it is just rapidly moving gases … far out of reach … away in the sky. What kind of business model is that?
  • Although RADLab (Reliable Adaptive Distributed Systems Laboratory) has put out some interesting papers, I was a little surprised when I read their acknowledgements in the CACM article A View of Cloud Computing.  It reads like a who’s who in cloud computing: Amazon, Google, Microsoft, Sun, eBay, Cloudera, Facebook, and so on. The original Berkeley paper has a shorter list of cloud companies funding their work. I’m sure they are maintaining their academic integrity, but it does show that they are not wholly independent. Remember what Kitty Foyle said:

    I’ve taught myself a lesson, or I hope I have: when I find myself thinking something I stop a minute and ask myself, Now who had it all figured out beforehand that was the way they wanted me to think?
    — From Christopher Morley’s novel “Kitty Foyle”

  • Although the Berkeley papers raise a number of very interesting issues, none of them requires vacuous meaningless jargon to further obscure the subtleties and complexities of emerging technology trends! So my final gripe is that the name “cloud” tends to obscure what is really important even when I agree with “the cloud thinkers”.

Perhaps the most important issue “the cloud people” are missing is what might be called comprehensive flexibility. As a user of software technology, I want my computing functionality everywhere … in every imaginable format. For example, I’d like the software that I’ve invested the time to learn to be available on my desktop (32-bit, 64-bit, Mac, Windows, or Linux), and I’d like it to work whether the Net is working or not; on my cell phone and other portable devices (again, with network or not); in the data center (clustered or not); in the WAN (Wide Area Network; note that the Internet is our shared, global WAN); perhaps distributed among several hosting providers; and perhaps even provided by “utilities” (to save the trouble of maintenance and scaling costs). But I think software should be so flexible that it can live in each of those environments. Talk about utility computing: wouldn’t software have so much more utility if it worked everywhere, instead of being beholden to whatever your software provider offers or whatever hardware you happen to have in front of you right now?

Fortunately this type of flexible software does exist. It is called Free and Open Source Software (FOSS) and it is becoming ubiquitous. In fact, whether you know it or not, you are using FOSS: Apache, the FOSS web server, runs this web site and indeed the majority of all web sites. WordPress, the blogging software we use here, is also “everywhere”, and you can purchase it from “cloud” utility providers or install, run, and modify it yourself. The list of important FOSS goes on and on, and this blog is dedicated to helping elucidate its importance as well as the issues involved in managing it.

So I would argue that instead of letting our heads go to the “clouds” we need to ask: how can we make software that works in all environments, on all hardware, and for all people? … how can we make software that is comprehensively flexible?

Posted by CJ Fearnley in FOSS Community, 0 comments

Please Document the Shop: On the importance of good systems documentation

We have all heard this: You need to document the computer infrastructure. You never know when you might be “hit by a bus”. We hear this and think many frightening things, reassure ourselves that it will never happen, and then put the request on the back burner. In this article I will expand on the phrase “hit by a bus” and then look at the consequences.

Things do happen that prevent people from coming into work. The boss calls home, talks to the wife, and makes the sad discovery that Mike won’t be coming in anymore: he passed away last night in bed. People get sudden illnesses that disable them. Car accidents happen.

More often than these tragedies occur, thank goodness, business conditions change without warning. In reorganizations whole departments disappear, computer rooms are consolidated and moved, companies are bought and whole workforces replaced. I have had the unhappy experience of living through some of this.

Some organizations have highly transient workforces because of the environment they operate in. Companies located near universities benefit from an influx of eager, young, upwardly mobile university graduates. These workers are keen to gain experience but soon find higher-paying jobs in the “real world” further away from campus. These companies have real turnover problems: people are moving up so quickly, they don’t have time to write things down.

Even when you keep people in place and maintain a fairly stable environment, people discover that what they have documented in their heads can just fade away. This is getting to be more and more of an issue: networks, servers and other such infrastructure have been around for 20 years in many organizations. Fred the maintainer retired five years ago. Or Fred the maintainer was transferred to sales. The longer systems are around, the more things can happen to Fred. Fred might be right where he was 20 years ago; he just can’t remember what he did.

What does all this mean? What are the consequences of losing organizational knowledge in a computer organization? To be blunt, it creates a hideous environment for your computer people. The system is a black box to them. They are paralyzed. They are rightfully afraid. Every small move they make can bring down the system in ways they cannot predict. Newcomers take much longer to train. Old-timers learn to survive by looking busy while doing nothing. The politics of the shop and the whole company are made bloody by the various interpretations of the folklore of the black box. He/she who waves their arms hardest rules the day. This is no way for your people to live.

This is no way for the computer infrastructure to live either. While the games are played, the infrastructure evolves more and more slowly. Before long the infrastructure is frozen. Nobody dares to touch it. The only way to fix it is to completely replace it at considerable expense. In elaborate infrastructures this is easier said than done. The productive lifetime of the platform is shortened; it was not allowed to grow and evolve to lengthen its lifetime. Think of the Hubble Telescope without all the repairs and enhancements over the years: it would have burned up on re-entry long ago.

Having made my case, I ask again: for your own good, please document the shop. Make these documents public and make them accurate. Record what actually is, rather than what you wish it to be. It is better to be a little embarrassed for a short while than to be misled later on. Update the documentation when changes occur; an out-of-date document can be as bad as no document at all. Make an effort to record facts. At the same time, don’t leave out the general philosophies that guided the design and other qualitative information, because they help your successors interpret the facts when ambiguities occur.

Think of what you leave behind. Persuade your boss to make this a priority as well. Hopefully the people at your next workplace will do the same.

Posted by Laird Hariu in Eternally Regenerative Software Administration, Security, Systems Management, 0 comments