Friday, May 9, 2008

XP SP3 Glitch a 'Gotcha' For IE7 & 8

By Stuart J. Johnston
May 7, 2008

Microsoft finally released Windows XP Service Pack 3 (SP3) to the general public earlier this week, after a minor glitch or two the week before. It is an update that many XP users have awaited impatiently for months.

Despite the fact that it's now available, however, the company still has a caveat for some users. If you have Internet Explorer 7 or 8 already installed, you may want to uninstall it before installing SP3. Then, if you wish, you can reinstall IE afterwards.

Why? As the 1990s buzz phrase goes: it's complicated.

At least that's the message in a posting made on Microsoft's (NASDAQ: MSFT) IE team blog this week.

It revolves around the fact that XP SP2 shipped with IE6. However, XP SP3 ships with a slightly different version of IE6. It also concerns the order in which the service pack and IE7 or IE8 are installed.

"If you choose to install XP SP3, Internet Explorer 7 will remain on your system after the install is complete. Your preferences will be retained. However, you will no longer be able to uninstall IE7," Jane Maliouta, deployment program manager for IE8, said in her blog post. The same goes for IE8, which is currently in beta test.

That's because the uninstallation process saves the wrong set of IE6 files on your hard disk, which would cause big problems later – so you're locked out of simply reverting to IE6.

The best way to handle the problem, Maliouta said, is to uninstall IE7 first, install XP SP3, and then reinstall IE7.

For the more adventurous who may have installed the beta test release of IE8, the warning counts double. Microsoft has set its download sites to not offer SP3 to users who already have IE8 installed – for good reason. If you install SP3 on top of IE8, as with IE7, you will no longer be able to uninstall the beta software.

"Since people are more likely to uninstall beta software, we strongly recommend uninstalling IE8 Beta 1 prior to upgrading to Windows XP SP3 to eliminate any deployment issues and install IE8 Beta 1 after XPSP3 is on your machine," Maliouta added.

Two analysts said they don't view the situation as a significant problem, but one said that it makes the update process more complex than it should be.

"I suppose there could be some applications that are affected, but I don't see it having any impact on most users," Michael Cherry, lead analyst for operating systems at researcher Directions on Microsoft, told InternetNews.com.

Roger Kay, president of analysis firm Endpoint Technologies, was of a similar mind.

"It sounds like a glitch [Microsoft] needs to fix, but it doesn't sound like a big deal," he said. "Still, a user shouldn't have to go through a lot of work to get it fixed," Kay added.

The company had planned to release XP SP3 last week, but that fell through after Microsoft found a clash between the service pack and Microsoft's Dynamics Retail Management System (RMS).

Microsoft announced on Monday it had put a filter in place so that XP SP3 is not offered to users with RMS installed. Then it released the service pack as planned. The company is working to come up with a solution for RMS users.

Thursday, May 8, 2008

What can you do with a second Ethernet port?

By: Nathan Willis

Purchase a new PC or motherboard soon, and the chances are good that it will come with two built-in network interfaces -- either two Ethernet jacks or one Ethernet and one Wi-Fi. Tossing in a second adapter is an inexpensive way for the manufacturer to add another bullet point to the product description -- but what exactly are you supposed to do with it? If you are running Linux, you have several alternatives.

Plugging another Ethernet cable into the second jack and hoping for the best will accomplish nothing; you have to configure Linux's networking subsystem to recognize both adapters, and you must tell the OS how to use them to send and receive traffic. You can do the latter step in several different ways, which is where all the fun comes in.

The big distinction between your options lies in the effect each has on the other devices on your network (computers, routers, and other appliances) -- intelligently routing network traffic between them, linking them together transparently, and so on. In some cases, the simplest end result is not the easiest to set up, so it pays to read through all of the alternatives before you decide which to tackle.
Bonding

From your network's perspective, the simplest option is channel bonding or "port trunking" -- combining both of the computer's interfaces into a single interface that looks like nothing out of the ordinary to your applications.

A combined logical interface can provide load balancing and fault tolerance. The OS can alternate which interface it uses to send traffic, or it can gracefully fail over between them in the event of a problem. You can even use it to balance your traffic between multiple wide area network (WAN) connections, such as DSL and cable, or dialup and your next door neighbor's unsecured Wi-Fi.

To bond two Ethernet interfaces, you must have the bonding module compiled for your kernel (which on a modern distro is almost a certainty), and the ifenslave package (which is a standard utility, although you might need to install it from your distro's RPM or APT repository).

On a typical two-port motherboard, the Ethernet adapters are named eth0 and eth1, so we will use that for our example commands. With ifenslave installed, take both Ethernet adapters offline by running sudo ifdown eth0 and sudo ifdown eth1. Load the bonding module into the Linux kernel with modprobe. There are two important options to pass to the module: mode and miimon. Mode establishes the type of bond (round-robin, failover, and so on), and miimon establishes how often (in milliseconds) the links will be checked for failure. sudo modprobe bonding mode=0 miimon=100 will set up a round-robin configuration in which network packets alternate between the Ethernet adapters as they are sent out. The miimon value of 100 is a standard place to begin; you can adjust it if you really want to tweak your network.

To create an actual bond (which for convenience we'll call bond0), run sudo ifconfig bond0 192.168.1.100 up to assign an IP address to the bond, then run ifenslave bond0 eth0 followed by ifenslave bond0 eth1 to tie the physical Ethernet interfaces into it.
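
Putting those steps together, a minimal bonding session looks like this. The adapter names, IP address, and mode value are the example values from above; adjust them for your own hardware and network:

# Take both physical adapters offline before enslaving them
sudo ifdown eth0
sudo ifdown eth1

# Load the bonding driver: mode=0 is round-robin, miimon=100 checks
# each link for failure every 100 milliseconds
sudo modprobe bonding mode=0 miimon=100

# Bring up the logical bond with an address, then tie in the physical ports
sudo ifconfig bond0 192.168.1.100 up
sudo ifenslave bond0 eth0
sudo ifenslave bond0 eth1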

Round-robin mode is good for general purpose load balancing between the adapters, and if one of them fails, the link will stay active via the other. The other six mode options provide features for different setups. Mode 1, active backup, uses just one adapter until it fails, then switches to the other. Mode 2, balance XOR, tries to balance traffic by splitting up outgoing packets between the adapters, using the same one for each specific destination when possible. Mode 3, broadcast, sends out all traffic on every interface. Mode 4, dynamic link aggregation, uses a complex algorithm to aggregate adapters by speed and other settings. Mode 5, adaptive transmit load balancing, redistributes outgoing traffic on the fly based on current conditions. Mode 6, adaptive load balancing, does the same thing, but attempts to redistribute incoming traffic as well by sending out ARP updates.
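
If plain failover is all you are after, for example, you would load the module in active-backup mode instead; everything else in the setup stays the same:

# Mode 1 (active backup): use one adapter until it fails, then switch
sudo modprobe bonding mode=1 miimon=100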

The latter, complex modes are probably unnecessary for home use. If you have a lot of network traffic you are looking to manage, consult the bonding driver documentation. For most folks, bonding's fault tolerance and failover are a bigger gain than any increased link speed. For example, bonding two WAN links gives you load balancing and fault tolerance between them, but it does not double your upstream throughput, since each connection (such as a Web page HTTP request) has to take one route or the other.
Bridging

The bonding solution is unique in that both network adapters act as a single adapter serving the machine itself. The other solutions use the two adapters to provide a new or different service to the rest of your network.

Bridging, for example, links the two network adapters so that Ethernet frames flow freely between them, just as if they were connected on a simple hub. All of the traffic heard on one interface is passed through to the other.

You can set up a bridge so that the computer itself does not participate in the network at all, essentially transforming the computer into an overpriced Ethernet repeater. But more likely you will want to access the Internet as well as bridge traffic between the ports. That isn't complicated, either.

Bridging requires the bridge-utils package, a standard component of every modern Linux distribution that provides the command-line utility brctl.

To create a bridge between your network adapters, begin by taking both adapters offline with the ifdown command. In our example eth0/eth1 setup, run sudo ifdown eth0 and sudo ifdown eth1 from the command line.

Next, create the bridge with sudo brctl addbr bridge0. The addbr command creates a new "virtual" network adapter named bridge0. You then connect your real network adapters to the bridge with addif: sudo brctl addif bridge0 eth0 adds the first adapter, and sudo brctl addif bridge0 eth1 adds the second.

Once configured, you activate the bridge0 virtual adapter just as you would a normal, physical Ethernet card. You can assign it a static IP address with a command like sudo ifconfig bridge0 192.168.1.100 netmask 255.255.255.0, or tell it to retrieve its configuration via DHCP with sudo dhclient bridge0.
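
Here is the complete bridging sequence in one place, again assuming the eth0/eth1 names and an example static address:

# Take the physical adapters offline
sudo ifdown eth0
sudo ifdown eth1

# Create the virtual bridge adapter and attach both physical ports
sudo brctl addbr bridge0
sudo brctl addif bridge0 eth0
sudo brctl addif bridge0 eth1

# Give the bridge an address (or run: sudo dhclient bridge0)
sudo ifconfig bridge0 192.168.1.100 netmask 255.255.255.0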

You can then attach as many computers, hubs, switches, and other devices as you want through the machine's Ethernet ports, and they will all be able to see and communicate with each other. On the downside, if you have a lot of traffic, your computer will spend some extra energy passing all of those Ethernet frames back and forth across the two adapters.
Firewalling and gateway-ing

As long as you have excess traffic zipping through your computer, the OS might as well look at it and do something useful, such as filter it based on destination address, or cache repeatedly requested Web pages. And indeed, you can place your dual-port computer between your upstream cable or DSL connection and the rest of your local network, to serve as a simple Internet-connection-sharing gateway, or as a firewall that exerts control over the packets passing between the network interfaces.

First, you will need to bring both network adapters up and assign each a different IP address -- and, importantly, IP addresses that are on different subnets. For example, sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0 and sudo ifconfig eth1 192.168.2.100 netmask 255.255.255.0. Note that eth0's address is within the 192.168.1.x range, while eth1's is within 192.168.2.x. Maintain this separation when you add other devices to your network and you will keep things running smoothly.

Forwarding the packets between the Internet on one adapter and your LAN on the other is the purview of iptables, a tool for configuring the Linux kernel's IP filtering subsystem. The command sudo iptables -A FORWARD --in-interface eth1 --out-interface eth0 --source 192.168.2.0/255.255.255.0 -m state --state NEW -j ACCEPT allows computers on the LAN interface eth1 to start new connections, and forwards them to the outside world via the eth0 interface. Following that with sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT keeps subsequent packets from those connections flowing smoothly as well.

Next, sudo iptables -A POSTROUTING -t nat -j MASQUERADE activates Network Address Translation (NAT), secretly rewriting the IP addresses of traffic from the LAN so that when it goes out to the Internet, it appears to originate from the Linux box performing the routing. This is a necessary evil for most home Internet connections, both because it allows you to use the private 192.168.x.x IP address block, and because many ISPs frown upon traffic coming from multiple computers.

Finally, run sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" to activate the kernel's packet forwarding.
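
For reference, here is the whole gateway recipe gathered into one sequence, with eth0 facing the Internet and eth1 facing the LAN, using the example addresses from above:

# Give each adapter an address on its own subnet
sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0
sudo ifconfig eth1 192.168.2.100 netmask 255.255.255.0

# Let LAN machines open new connections out through eth0
sudo iptables -A FORWARD --in-interface eth1 --out-interface eth0 \
     --source 192.168.2.0/255.255.255.0 -m state --state NEW -j ACCEPT

# Keep packets for established connections flowing in both directions
sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Rewrite outgoing LAN traffic so it appears to come from this box (NAT)
sudo iptables -A POSTROUTING -t nat -j MASQUERADE

# Turn on packet forwarding in the kernel
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"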

This setup will pass traffic from your LAN to your Internet connection, but it does not configure the network settings on the LAN computers themselves. Each of them needs an IP address, gateway and network information, and some working DNS server addresses. If your dual-adapter Linux box is serving as a NAT gateway, you could easily have it provide that information to the clients as well, using DHCP. Your distro probably comes with the dhcpd package. Configuring dhcpd is beyond the scope of the subject here, but check your distro's documentation for Internet connection sharing and you will likely find the instructions you need.
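
As a rough sketch of what that configuration involves, a minimal dhcpd.conf for the example 192.168.2.x LAN could look something like the following. The address range is a hypothetical choice, and your distro's Internet connection sharing documentation remains the authoritative reference:

# Hypothetical minimal dhcpd.conf for the example 192.168.2.x LAN
subnet 192.168.2.0 netmask 255.255.255.0 {
    # hand out this pool of addresses to LAN clients
    range 192.168.2.50 192.168.2.150;
    # the gateway box's LAN address, from the example above
    option routers 192.168.2.100;
    # DNS for clients; point at your ISP's servers or a local resolver
    option domain-name-servers 192.168.2.100;
}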

Once you are comfortable using iptables to set up basic NAT and packet forwarding, you can dig a little deeper and learn how to use your box as a first-rate firewall by writing rules that filter traffic based on source and destination address, port, and protocol.
Isolating

Finally, you can always configure your secondary network adapter to work in complete isolation from the rest of your LAN.

Sure, there is little gain to such a setup for general-purpose computers, but it is a popular choice for certain Ethernet-connected devices that only need to send data to one destination. Homebrew digital video recorder builders use the technique to connect the HDHomeRun HDTV receiver directly to a MythTV back end, thereby isolating the bandwidth-hogging MPEG streams from the LAN. The same traffic separation idea might also come in handy for other single-purpose devices, such as a dedicated network-attached storage (NAS) box, a networked security camera, or your Ethernet-connected houseplant.

For most devices, isolating your second adapter entails setting up the computer to act as a DHCP server as in the gateway example above, but without worrying about NAT rules routing between the secondary client and the rest of the network.
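
In the simplest case, the sketch is just a static address on the spare port in its own private subnet; the 192.168.3.x range here is an arbitrary example, and a dhcpd subnet declaration like the one above covers devices that expect DHCP:

# Put the spare port on its own private subnet, invisible to the main LAN
sudo ifconfig eth1 192.168.3.1 netmask 255.255.255.0 up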
Caveat emptoring

So which technique is right for you? My advice is to think about what network trouble you most need to prepare for. If your dual-adapter box is a server with heavy traffic to handle, or you need to balance your traffic across two WAN connections, bonding is for you. On the other hand, if you just bought an HDHomeRun to add to your MythTV back end, think about attaching it directly to the spare interface.

Bridging and gatewaying are most similar, in that they use the dual-adapter box to connect multiple other devices into a single network. If that is what you need to do, consider that bridging works at the Ethernet link level, well below IP and TCP in the protocol stack. At the Ethernet level, the only sort of traffic shaping you can do is that based on the hardware MAC address of the computer. You have significantly more control when you run a full-fledged NAT gateway.

But whichever option you choose, remember that messing around with your network configuration can get you disconnected in a hurry if you make a mistake. For that reason, all of the above examples use commands that change the "live" system, but don't alter the configuration files Linux reads in at startup. If you make a mistake, a reboot should bring you back to a known working state.

If you decide you want to make your changes permanent, your best bet is to consult your distro's documentation. Distros vary slightly in where and how they store network configuration scripts (Red Hat uses /etc/sysconfig/network-scripts/, for example, while Ubuntu uses /etc/network/).

Once you start digging into the details, you'll find even more possibilities for utilizing that second network adapter under Linux. But you should now be armed with a general idea of how to make both adapters talk to your network at the same time -- and you can do your part to eliminate network adapter wastefulness.

OfflineIMAP makes messages and attachments available locally

May 06, 2008

By: Ben Martin

OfflineIMAP allows you to read your email while you are not connected to the Internet. This is great when you are traveling and really need an attachment from a message but cannot connect to the Internet.

You can use OfflineIMAP to sync all your email during the night so that it is all instantly available when you wake up. This is a security trade-off -- you gain speed and availability for your email at the expense of having to properly protect the local copy of all the email that is created on your laptop.

OfflineIMAP is designed to contact your IMAP servers and create a local copy of your email in maildir format. You then point your mail client at the local maildir tree and use your mail client as normal. OfflineIMAP can then sync any changes, such as which messages you have read and deleted, back to the server. OfflineIMAP performs a bidirectional sync, so new messages from the server are downloaded to your local maildir while any changes you have made locally are sent to the IMAP server.

If your email client does not support maildir format, you can use OfflineIMAP to sync email between two IMAP servers and ultimately accomplish the same thing. This scenario is a little more complex, as you need to install an IMAP server on your laptop, tell your email client to connect to the IMAP server on localhost, then use OfflineIMAP to keep the IMAP server on your laptop in sync with your main IMAP server. An alternative is to use OfflineIMAP to sync to a maildir repository as normal and tell your local IMAP server to use that maildir as its email source. This thread contains information on setting up courier-imap locally to serve up your mail.

OfflineIMAP packages are available for openSUSE, Ubuntu Gutsy, and from the Fedora 7 and 8 repositories. If no packages exist for your distribution, the documentation provides good information on installation from source. I used OfflineIMAP 5.99.2 from the Fedora 8 repository. Version 5.99.2 does not support the Gmail account type. Version offlineimap-5.99.7 from the Fedora rawhide repository does support Gmail but has another bug relating to directory creation which causes synchronization to fail. For these reasons I would recommend using the IMAP account type and manually configuring it for Gmail until package repositories contain later versions of OfflineIMAP.

The primary configuration file for OfflineIMAP is $HOME/.offlineimaprc, and you can find a commented template configuration file online. The configuration file defines one or more accounts. For each account you must set the local and remote repository. A repository is configured in its own section and specifies its type: Maildir for storing email locally, or IMAP for connecting to a mail server. When connecting to an IMAP server you can specify the hostname, username, and password, and whether OfflineIMAP should use SSL to connect to the IMAP server.

Configuration and setup is shown below. First I create the configuration file using the sample that comes with the offlineimap package. The accounts directive is set to contain a single Gmail account. This account has both a local and remote repository so that OfflineIMAP knows where to store email locally and what server to contact. The local repository is a maildir in my home directory. The remote repository uses the type IMAP instead of Gmail because of the version issues discussed above. I have selected an appropriate email address as the remoteuser so spambots will make themselves known. The nametrans directive lets you change the name of folders in the local repository. In this case I call re.sub twice to first change occurrences of INBOX, [Gmail], or [Google Mail] into root. One directory will be missed by this initial mapping, which is then accounted for by moving the Sent folder inside the root folder. This translation is useful because Evolution expects your inbox to be directly in the root folder of your IMAP account. If you change where the local copy of INBOX is stored, Evolution can more naturally interact with the local mail repository. You can also set up more elaborate folder name translations depending on your needs.

$ cp /.../offlineimap.conf ~/.offlineimaprc
$ vi ~/.offlineimaprc

accounts = my-gmail

[Account my-gmail]
localrepository = GMailLocalMaildirRepository
remoterepository = GMailServerRepository

[Repository GMailLocalMaildirRepository]
type = Maildir
localfolders = ~/.offlineimap-maildir-my-gmail
sep = .
restoreatime = no

[Repository GMailServerRepository]
type = IMAP
remoteuser = i-am-a-spam-bot-log-me@gmail.com
remotehost = imap.gmail.com
ssl = yes
remotepassfile = ~/.offlineimap-pass-my-gmail
realdelete = no
nametrans = lambda foldername: re.sub('^Sent$', 'root/Sent', re.sub('^(\[G.*ail\]|INBOX)', 'root', foldername))
...

$ mkdir -p ~/.offlineimap-maildir-my-gmail

With this configuration in place, just run offlineimap. It will check its metadata, notice that you haven't performed any previous sync, and download everything from your IMAP server.

You should then have a complete copy of your email in maildir format on your local machine. See the client notes for information on configuring your email client to directly use the email from this maildir. When you want to send your changes back to the main IMAP server and check for new email, just run offlineimap again. Alternatively, you can use the autorefresh directive in ~/.offlineimaprc to tell offlineimap to continue to sync your accounts every n minutes.
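
For example, adding the directive to the account section of ~/.offlineimaprc keeps offlineimap running and re-syncing on its own; the five-minute interval is an arbitrary choice:

[Account my-gmail]
localrepository = GMailLocalMaildirRepository
remoterepository = GMailServerRepository
# re-sync this account automatically every 5 minutes
autorefresh = 5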

Normally, you should run OfflineIMAP without any command-line options to bidirectionally synchronize your configured email accounts, but OfflineIMAP accepts some options that might be handy for casual use. The -a option accepts a comma-separated list of accounts that you wish to synchronize. This can be great if you are expecting a message but have some accounts defined that are slower to sync than others. The -u option lets you choose one of many interfaces to OfflineIMAP. The default is the Curses.Blinkenlights interface, which you might find to be too distracting. TTY.TTYUI displays a simpler and less distracting progress report. You can also change the interface that will be used by default by altering the ui directive in ~/.offlineimaprc. The -c option allows you to specify an alternate location to ~/.offlineimaprc for the configuration file.
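
A couple of example invocations, using the account name from the configuration above (the alternate config file path is hypothetical):

# Sync only the my-gmail account, with the quieter TTY interface
offlineimap -a my-gmail -u TTY.TTYUI

# Read settings from an alternate configuration file
offlineimap -c ~/offlineimaprc-laptop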

Having the contents of your IMAP account available offline means you don't have to seek out an Internet connection just to get an attachment or wonder if a particular message has been cached locally by your email client. If you are working with moderate-sized attachments, the ability to schedule your laptop to grab your email an hour before you wake up can save precious time when you are traveling.

As the SCO rolls

By: Steven J. Vaughan-Nichols

Reality, as good writers know, is sometimes stranger than fiction. SCO's recent performance in the U.S. District Court in Utah is a perfect example. With years to prepare, SCO executives made some remarkable statements in their attempt to show that SCO, not Novell, owns Unix's copyright.

While this case is not about SCO's claims that IBM and other companies placed Unix IP (intellectual property) into Linux, Novell's attorneys decided that they would address this issue as well. One presumes that, since this may be their one and only chance to attack SCO's Linux claims in a courtroom -- what with SCO facing bankruptcy -- they decided to address this FUD once and for all.

Before getting to that, though, Novell hammered on Christopher Sontag, one-time head of SCOSource, the division of SCO devoted to selling Unix's IP. Sontag, while dodging around what code SCO was actually selling -- UnixWare code or the whole Unix tree leading to UnixWare -- was finally cornered into admitting that SCO had received $16,680,000 from Microsoft and $9,143,450.63 from Sun and did not report these deals or income to Novell, as it was required to do under the terms of the Novell/SCO APA (Asset Purchase Agreement).

On the second day of the hearing, April 30th, Sontag admitted that he did not "know if there's any code that is unique to UnixWare that is in Linux." He also admitted that he did not know of any analysis that showed there was any "legacy SVRX [Unix] software" in UnixWare. For someone who was in charge of SCO's Unix IP, who arranged to license it to Sun and Microsoft, and whose company was suing IBM for using Unix code in Linux, Sontag seemed remarkably ill-informed about exactly what it was that he was selling.

Sontag was followed on the witness stand by SCO CEO Darl McBride. With McBride on the stand, as can be seen in the trial's transcript, things became somewhat surreal. McBride, only minutes after Sontag said he didn't know if there was Unix or UnixWare code in Linux, said, "We have evidence System V is in Linux." McBride's most memorable moment came, though, when he claimed, after years of never being able to demonstrate any direct copying of Unix material into Linux, that "Linux is a copy of UNIX, there is no difference [between them]."

Regarding SCO's May 2003 letter to companies that were using Linux, which warned that "Therefore, legal liability that may arise from the Linux development process may also rest with the end user," McBride claimed that "I don't see anything in here that says you have to take a license from us."

From there, McBride went on to say that even though SCO had stated in this letter that "We intend to aggressively protect and enforce our rights," and had noted that the company had already sued IBM, SCO didn't mean to imply that "we're going to go out and sue everybody else." At the time, most observers agreed that SCO certainly sounded as if it were threatening to sue Linux end users.

McBride then managed to entangle himself in how SCO accounted for the revenue it had received from Microsoft and Sun. The implication, which McBride vigorously denied, was that SCO had misled the stock-buying public in SEC documents in 2003 and 2004.

In what may prove to be a problem for Sun in the future, McBride also said that while SCO felt Sun had the right to open-source Unix in OpenSolaris, its most recent Sun contract was really about Sun "looking for ways to take their Solaris operating system and make it more compliant with the Intel chip set, which is what SCO has a deep history of doing."

Greg Jones, Novell's VP of Technology Law, was then sworn in. Jones testified that SCO's 2003 agreement with Sun "allows Sun, then, to release Solaris as open source under an open source licensing model, which they have done in a project called OpenSolaris. So it poses a direct competitive challenge to Linux and, certainly, to Novell, given that Linux is an important part of Novell's business. We are a Linux distributor."

Jones went on to say that if Novell had been aware of SCO making this deal with Sun, it would not have allowed it because, "It simply would not have been in Novell's commercial interests. In the fall of 2002, Novell had acquired Ximian, a Linux desktop company. We were exploring ways to get into the Linux market so enabling a competitor to Linux simply would not have been in Novell's interests. In the manner in which they entered this agreement, when they did it, they kept all the money. I assume that would have been their proposal but, fundamentally, it simply would have been contrary to Novell's business interests to enable something like this."

On the third day of the case, SCO stuck to its guns but added little more to its arguments.

On the case's final day, Novell simply stated that, when all was said and done, the APA made it clear that Novell, and not SCO, had the rights to Unix's IP. Therefore, SCO had no right to make these deals, and certainly no right whatsoever to keep the funds from such deals.

In Novell's closing arguments, Novell also hit again on the SCO/Sun deal. Novell pointed out that "There's no question they (SCO) allowed Sun to open-source Solaris," and that while SCO executives would have you believe that giving Sun the right to open-source Solaris had no market value, SCO's engineers believed that open-sourcing Solaris had great value.

So, as the case moves on, SCO still seems unable to make any headway on its claims that the APA gave it the right to sell Unix's IP. Novell's attorneys also made a point of demonstrating that SCO still has only naked claims, without any evidence, that there's any Unix code inside Linux. The judge is expected to rule on the case in the near future.

Finally, Sun may yet have to contend with Novell's IP interests in OpenSolaris. Novell clearly doesn't believe Sun had the rights to open-source the System V code within OpenSolaris under its CDDL (Common Development and Distribution License).