Monday, April 28, 2008

Apple in rumored talks to license vibration feedback for iPhone

By Aidan Malley
Published: 02:45 PM EST

Apple has reportedly begun talks with Immersion to integrate haptic feedback into future touchscreen devices, addressing a complaint leveled against the iPhone by fans of physical buttons and keyboards.

An Apple worker has allegedly leaked to Palluxo that Immersion executives met twice with their Apple peers this week to discuss integrating Immersion's vibration response technology into the cellphone.

The meetings are said to parallel a more publicized deepening of relations between the two companies through indirect means: Immersion this week hired Clent Richardson, a former Worldwide Solutions Marketing VP at Apple between 1997 and 2001.

What exactly would be implemented is unclear. However, Immersion's most recent efforts have focused on using haptics to simulate physical button presses in an increasing number of touchscreen phones, including Sprint's soon-to-be iPhone rival, the Samsung Instinct.

The technique most frequently involves sending short, concentrated pulses through all or specific locations of a phone as the user taps buttons in software. In effect, haptics not only restore some of the feel absent in touchscreens, but also give users a way of confirming that the phone has recognized a command through more than just visual output.

A frequently-cited complaint regarding the iPhone is its lack of tactile feedback for converts from BlackBerries and other smartphones, many of whom expect the relative certainty of physical responses while navigating the interface or typing.

Neither Apple nor Immersion has commented on the rumored discussions, which are still early and so aren't expected to result in a finished product for some time.

Remote control function said hidden in iPhone beta firmware

By Aidan Malley
Published: 02:20 PM EST

Code in Apple's latest iPhone 2.0 beta firmware allegedly contains references to a utility that will let an iPhone or iPod touch play media from nearby iTunes sources.

Pointing only to an unnamed person as the source for its leak, TUAW claims that multiple string entries in the cellphone's beta code refer to selecting from different media categories and include mentions of dialog boxes that let users choose their particular source.

The information suggests that the additions are the groundwork for an Apple TV-like feature, purportedly named iControl, that would let users play any iTunes content over a local network with features similar to those of the dedicated media hub already on store shelves.

It's unclear whether the feature is strictly intended for streaming local content to the iPhone itself or can actively steer other devices, imitating a more advanced home theater remote such as Logitech's Harmony.

However, the listings as they appear would indicate a direct connection to a 2.5-year-old patent filing submitted by Apple in late 2005.

In the filing, the iPhone maker says it has invented a method that would let a portable media player view the contents of a local media server, such as a computer, and send instructions telling the media server to change tracks while it outputs content to a separate media receiver, whether physically attached to the computer (such as speakers) or remote (such as an Airport Express-like connection).

The aim is to let users steer media playback in a networked media system with existing hardware rather than dedicated controllers, Apple engineers state in the earlier patent.

Regardless of iControl's actual purpose, there are no clues as to when, if ever, the software will be released for the company's handheld devices.

Europe's not finished with Microsoft

Microsoft's troubles in Europe are far from over, as Neelie Kroes, the EU competition commissioner, has warned. We review the past and future options for Microsoft and the European Commission.

Posted by Richard Hillesley at 2:34PM, 24th April 2008

The 80s were the dog-eat-dog days of business. Top of the pile was Microsoft, the biggest and baddest of them all, led by Bill Gates, who invented the computer, the universe and everything.

Gates looked a bit like the nerd on the cover of Mad Magazine, made it to the cover of Time magazine, and was rich and successful beyond anybody's wildest dreams.

The view of Gates and Microsoft from inside the computer industry was more circumspect. PC software looked amateurish and nobody took it too seriously until the cultures began to collide in the business world during the middle of the decade. The affordable desktop computer, which sprang out of an unholy alliance between IBM, Intel and Microsoft, changed the face of computing in the home and in the work place, and for the most part was beneficial to the user, if only because it was cheap and accessible.

Microsoft always took more credit for this revolution than it probably deserved, but had a way of coming out on top, which owed everything to its early dominance of the operating system market for the IBM PC and its clones. From this dominance grew its prominence on the desktop, and the gradual eclipse of its competitors. The question that was always being asked of Microsoft was how much did the company owe its success to the quality of its software, and how much to the ruthlessness of its marketing?

From the beginning Microsoft had a special relationship with the original equipment manufacturers (OEMs), and made this relationship tell. Each innovation on the desktop, each new tool and the company that made it, either fell by the wayside or was assimilated into the Microsoft hive.

In the hive

Compaq had its arm twisted to stop it bundling Apple's Quicktime on the desktop. Internet Explorer, and later, the Windows Media Player, were bundled into the operating system, and given away free, sucking revenues and market share from Netscape, Real Networks and Apple. The squashing of Netscape and the subsequent death of the browser market led to Microsoft's conviction for monopolistic behaviour before the US antitrust courts.

Microsoft added platform-dependent "features" to Java to render Java's multi-platform features redundant, and when that ended up in court, developed the .NET platform, a very successful and popular alternative that reproduced many of the major features of Java with the notable exception of its multi-platform capabilities.

Kerberos, the encryption standard developed by MIT, was extended by Microsoft with the apparent objective of inhibiting interoperability in the workgroup server space and, in the words of Jeremy Allison of Samba: "these changes were treated as trade secrets, patented if possible, and only released under restrictive non-disclosure agreements, if released at all."

During the US anti-trust trials, Steven McGeady, a vice president of Intel, testified against Microsoft, Intel's most important trading partner, asserting that Microsoft intended to "embrace, extend and extinguish" competition by substituting open standards with proprietary protocols, and claimed that Intel had been warned to cease development of its Native Signal Processing audio and video technology, which promised to vastly improve user experience of the desktop - or else Microsoft would bypass Intel and develop Windows exclusively for AMD and National Semiconductor chips. "It was clear to us that if this chip did not run Windows it would be useless in the marketplace," McGeady testified. "The threat was both credible and terrifying."

Microsoft has always had an ambivalent relationship with the concept of interoperability and with the standards that make interoperability possible, tending to view the protocols and data formats it uses as "de facto" standards and "trade secrets" which it is free to "extend" with no obligation to share. This may not always be deliberate behaviour. Where there is a monopoly, standards become incidental, an option rather than an obligation. This tendency has been at the root of Microsoft's problems in the US and European courts. Microsoft is not being penalised for success, but for shutting the door on competition, and resisting any requests to modify its behaviour.

Into Europe

Microsoft's troubles in Europe began as early as 1993, when Novell complained that "onerous licensing conditions" imposed on OEMs by Microsoft were pushing NetWare out of the workgroup market.

In this market Novell had been the innovator, but Microsoft had muscled a napping, but still relevant, Novell out of the picture. Thus began a long history of litigation which culminated in the 17 September 2007 decision of the European Court of First Instance, which upheld the European Commission's decision to fine Microsoft and uphold the principle of interoperability.

The September judgement came at the end of a ten-year case initiated by Real Networks, supported by Sun Microsystems, Novell and others, all arguing that innovative products were being pushed out of the market on the back of Microsoft's monopoly. Over the years each of these litigants withdrew from the case after doing deals with Microsoft worth billions of dollars, leaving the Free Software Foundation Europe (FSFE), the Samba Team, and their allies to fight the case to the finish.

As Jeremy Allison of the Samba Team told Groklaw: "the copyright in Samba is spread across many, many individuals, all of whom contributed under the GNU GPL 'v2 or later', now 'v3 or later' licenses. You can't buy that. There's nothing to sell. There's no point of agreement for which to say 'here are the rights to Samba, we'll go away'. We're in the, some would say unique, some would say unenviable position, of not being able to sell out. We can't be bought."

Much has been made of the Commission's insistence that Microsoft offer a version of Windows without Windows Media Player bundled, and the record fines imposed upon Microsoft. Improbably, some press coverage suggested that the European decision was a blow against innovation and competition. But the fines mean little more than a few pence on the price of Windows to a company as rich as Microsoft. The fines are a penance for Microsoft's prevarications and refusal to comply with the European courts.

The most important part of the judgement was the Commission's insistence that Microsoft be forced to publish the protocols used by Windows clients and servers under "reasonable" and "non-discriminatory" terms.

For this decision to have any meaning it was incumbent upon Microsoft to publish the protocols in their entirety, and to reflect the actual behaviour of Microsoft servers and clients in the real world - without evasion, inconsistencies, broken standards, obfuscations, fees or hidden patents - to comply with the commonly understood meaning of open standards and protocols as they have been implemented by other participants in the computing industry.

Microsoft has complied, with reservations, releasing protocols and data formats free for "non-commercial" use, (which immediately discriminates against competition), and making promises of future interoperability with its products. Unfortunately the promises have come with limitations, and the limitations target free and open source software.

As Thomas Vinje of ECIS noted: "For years now, Microsoft has either failed to implement or has actively corrupted a range of truly open standards adopted and implemented by the rest of the industry. Unless and until that behaviour stops, today's words mean nothing."

Bursting the bubble

It is worth noting that once Netscape was trounced and Microsoft assumed the monopoly position in the browser market, there was a five-year gap with no innovation or competition between the releases of IE6 and IE7. The subsequent release of IE7 was almost certainly prompted by the rapid rise of the open source browser, Firefox, and was notable for its failure to comply with W3C standards. Dominance of a market by a proprietary monopoly does not encourage innovation.

Throughout the European Commission's proceedings Microsoft claimed that the protocols were proprietary to Microsoft, and talked of protocols that were enclosed in a "blue bubble". Georg Greve, president of the FSFE explained: "The blue bubble was a theory that Microsoft invented in order to justify that it had kept parts of the protocol secret. They said that there's a difference between the internal protocols and the external protocols, if you want to describe them like that. They said that certain protocols that are so secret that they are in this blue bubble, because they had visualized this with a blue bubble, that this could never be shared without actually sharing source code, without sharing how the program exactly works. These protocols were so special that somehow, magically, you had to have the same source code to actually make that work. That was the blue bubble theory. So they said things like, 'HTML is outside the blue bubble, but the things you want us to disclose, that is inside the blue bubble.'"

In the wake of the decision, the US Assistant Attorney General for Antitrust, Thomas Barnett, made the highly contentious claim that the outcome, "rather than helping consumers, may have the unfortunate consequence of harming consumers by chilling innovation and discouraging competition," which drew a clear response from the EU competition commissioner, Neelie Kroes, that it was "totally unacceptable that a representative of the US administration criticised an independent court of law outside its jurisdiction."

In contrast, the American Antitrust Institute noted "the oddity of Barnett's statement" as both Europe and the US had found that Microsoft was "a monopolist which had acted to harm competition, and both insisted on interoperability in framing a remedy," and noted that "the EC has appropriately targeted strategies that would have the effect of deterring investment in innovations that might lead to a reduction of the monopolist's power and new benefits for consumers."

Talk is cheap

As the kerfuffle surrounding MS-OOXML demonstrates, the publication of protocols and data formats is not enough. To become truly universal, proprietary interest must be relinquished, and interoperability frameworks opened up to discussion, contribution and maintenance by third parties through a neutral party (usually a standards body), and this is something that the European commissioners are beginning to understand.

As the MS-OOXML kerfuffle has also demonstrated, such processes are highly political, and like the political process, can be influenced and misled.

But for the moment, Microsoft's tribulations in Europe are far from over. The Commission is investigating a complaint from Opera Software demanding that Internet Explorer comply with W3C standards, and one from the industry body, ECIS (European Committee for Interoperable Systems), in which Microsoft is alleged to have "illegally refused to disclose interoperability information across a broad range of products, including information related to its Office suite, a number of its server products, and also in relation to the so called .NET Framework.

The Commission's examination will therefore focus on all these areas, including the question whether Microsoft's new file format Office Open XML, as implemented in Office, is sufficiently interoperable with competitors' products."

In a press conference to announce Microsoft's latest fine, the EU competition commissioner, Neelie Kroes, emphasised that "a press release does not necessarily equal a change in a business practice. And if change is needed... then the change will need to be in the market, not in the rhetoric."

She also said: "There are lessons that I hope Microsoft and any other company contemplating similar illegal action, will learn.

* Talk, as you know, is cheap; flouting the rules is expensive.

* We don't want talk and promises, we want compliance.

* If you flout the rules you will be caught, and it will cost you dear."

Proprietary protocols are anathema to network computing and a deliberate hindrance to innovation and competition in computing environments. Few of the players, or users, maintain the illusion that a Microsoft-only world is either desirable or attractive - and the accusations of ballot stuffing, bribery, and undue political influence that surrounded the acceptance of OOXML as a standard by the ISO have only served to emphasise this reality.

Thursday, April 24, 2008

Apple's ultra-thin MacBook Air also slim on profits?

By Slash Lane
Published: 10:00 AM EST

In its determination to deliver the world's thinnest notebook, Apple admitted to sacrificing some speed and versatility, but a new analysis suggests that it may have given up some early profits as well.

Though the Cupertino-based Mac maker largely beat estimates for its second fiscal quarter on Wednesday, one sore spot appeared to be gross margin, which came in at about 100 to 200 basis points below most analysts' expectations at 32.9 percent.

An ensuing conference call was thus dominated by the matter, as Wall Street analysts repeatedly pelted management with questions on the perceived shortcoming as they sought a better understanding for their models going forward.

While management largely attributed the near 2 percent margin decline from the prior quarter to February's iPod shuffle price cut and a routine falloff in sales of Mac OS X Leopard and iWork, Piper Jaffray analyst Gene Munster offered his own explanation.

"We believe the margin outlook may be viewed negatively by investors, who likely wanted to see more of Apple's significant revenue upside trickle down to earnings," he wrote in a note to clients early Thursday morning. "The bottom line, we believe the margin was negativity impacted by a higher mix of Mac Book Air, which we now believe carries a lower margin."

On the bright side, Apple has likely built the potential for margin expansion into its MacBook Air design as adoption swells and component prices fall. What's more, Apple management appeared upbeat in stating that the Air has thus far shown little to no cannibalization effect on the company's other notebook offerings and thus could be considered largely responsible for helping push Mac unit growth to its highest rate in nearly two decades.

"The key takeaway from Apple's March quarter is that the Mac units grew at the highest year-over-year rates (units 51 percent and revenue 54 percent) in 17 years," Munster added in his note to clients. "Macs are the most meaningful category with the most potential and they are performing the best."

Looking ahead, the Piper Jaffray analyst said he's modeling conservatively for Mac growth rates to decline to 12 percent year-over-year for the remainder of calendar year 2008, which leaves "ample room for positive estimate revisions over the next 8 months."

"Mac growth is accelerating despite multiple quarters of strong growth, iPod sales are stabilizing with higher average-selling-prices due to the touch, and the iPhone will be significant in the second half of the year with the release of new hardware and software," he wrote.

Bacula: backups that don't suck

By Robert D. Currier on April 23, 2008 (9:00:00 AM)

Good systems administrators know that implementing a robust backup procedure is one of their most important duties. Unfortunately, it's also one of the most complex and least fun. When the phone rings and there's a panic-stricken user on the other end who has just lost a crucial document, you need to be confident that you can promptly recover his missing files. Failure to do so can bring about a speedy end to a promising career in systems administration. So what's a budding sysadmin to do? Download the latest release of Bacula and watch those backup woes disappear into the dark of night.

Led by head developer Kern Sibbald, the Bacula team has built an open source backup solution that is fast, reliable, and exceptionally configurable. Bacula is not a monolithic application, but rather a collection of programs that together provide a robust backup, recovery, and verification toolset suitable for five or 500 systems.

Getting started

For this review we tested Bacula on a single CentOS 4 server using the file system as our backup medium. In our production environment we use Bacula to manage more than 500GB of backups from multiple clients using a tape robot. However, the lengthy process of configuring the tape robot and multiple clients makes this a daunting task for the first-time user. We recommend you stick with a single host for your test drive if you've not worked with Bacula before.

Bacula is available as a package using the standard package management tools yum or apt-get, but we prefer to install the application from source. When you're just getting started with Bacula, building from source gives you a better feel for how the application operates.

After downloading and uncompressing the project's source code you'll need to run the configure script. Bacula's configure script is well written and produces meaningful debug output but requires a large number of command-line settings. To simplify the configuration process the development team has included a suggested list of options that handle most environments. Start with these settings and modify them where appropriate. Case in point: the choice of database. Bacula is a database-driven application and requires MySQL, SQLite, or PostgreSQL. Make sure you have one of these databases installed prior to configuring Bacula. Once you've run the configure script, you can build Bacula using the normal make; make install process.
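
For example, a source build with MySQL support might look roughly like the following (the flags shown are only an illustration; run ./configure --help and start from the options list the documentation suggests for your version):

./configure --with-mysql --prefix=/home/username/bacula
make
sudo make install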

Next, you must create the Bacula database and tables and set appropriate access permissions. Located in Bacula's bin directory, the script create_bacula_database will create Bacula's database after determining what database you are using. Once this script completes successfully, executing make_bacula_tables will create and populate the database tables. Finally, grant_bacula_privileges will establish the necessary access controls. A word of warning: grant_bacula_privileges creates an unrestricted access policy for the user bacula. You should modify this policy to suit your needs. At a minimum you should consider setting passwords for the MySQL users root and bacula.
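
With a MySQL backend, for instance, the whole sequence looks something like this (the final password step is our own suggestion rather than part of the supplied scripts):

cd /home/username/bacula/bin
./create_bacula_database
./make_bacula_tables
./grant_bacula_privileges
# then tighten the default policy, e.g. give the bacula database user a password
mysql -u root -e "SET PASSWORD FOR 'bacula'@'localhost' = PASSWORD('choose-a-password');"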

After successfully building and installing Bacula, the next step is setting up the config files. That can take some time, but once you've gotten them working you won't have to touch them again except when you add a new client or fileset.

To understand the configuration files, you need to understand how Bacula operates. Bacula comprises four main modules: the director, the storage daemon, the file daemon, and the console. The director is the "boss" of Bacula, providing job scheduling, backup media descriptions, and access control. In a typical Bacula deployment there is only one director.

The storage daemon handles all communications with the defined backup architecture: disks, single tape drives, tape robots, and optical drives. As with the director, there is usually only one storage daemon running per Bacula installation, but the storage daemon may have many backup devices defined.
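
For reference, a disk-based backup device in bacula-sd.conf looks something like the sketch below; the stock configuration ships with a similar FileStorage entry, and the path here is purely illustrative:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /home/username/bacula/backups  # directory where backup volumes are written
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}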

The file daemon is installed on each client machine and provides the communications link between the storage daemon and the client. It needs access to all files that will be backed up on that client.

The console handles communications between the administrator and Bacula. The administrator can start or stop jobs, estimate backup sizes, and review messages from Bacula. Consoles are available that use wxWidgets, GNOME, and Web browsers, but we prefer the TTY version of bconsole.

The bacula-dir.conf file, which controls the director, contains detailed information on the clients to be backed up, job definitions, filesets, and job schedules. One of the outstanding features of Bacula is how close the configuration files as supplied come to being ready to rumble. While a large installation with many clients will require significant editing of the configuration files, our test of Bacula required us only to give Bacula a list of files to back up.

Bacula typically installs to a subdirectory named bacula in the home directory of the user that is deploying the software. The configuration files are located in the bin subdirectory of /home/user/bacula. Switch to this directory and edit bacula-dir.conf. Search for the section of text beginning with "# By default this is defined to point to the Bacula build directory to give a reasonable FileSet to backup to disk storage during initial testing." Directly below these lines should be: File = /home/username/bacula/bacula-2.2.8. This is the FileSet definition, which controls what files and directories are to be backed up. You may change this definition to a directory of your choice or leave it as is during testing.
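
For example, a FileSet resource pointing at a directory of your own might look like this (a minimal sketch modelled on the sample configuration; adjust the name and path to suit):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5   # store checksums so backups can be verified later
    }
    File = /home/username/documents
  }
}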

Your first backup

By default Bacula uses the filesystem as its backup media. To keep things simple we won't attempt to configure Bacula to use a tape drive -- we'll stick with the preconfigured definitions.

Change directories to Bacula's bin folder. Execute the bacula script with start as an argument: ./bacula start. You should see the following three lines:
# Starting the Bacula Storage daemon.
# Starting the Bacula File daemon.
# Starting the Bacula Director daemon.

If Bacula successfully started, that's great -- you're ready to run your first backup. If not, carefully read the error messages and double-check the bacula-dir.conf file. Make sure you've pointed Bacula to the directory you wish to be backed up and that the directory exists and is readable.

The final step in taking Bacula for a test drive is using the console to initiate the job. From the Bacula bin directory execute the bconsole script. Bconsole should return an asterisk prompt. At the prompt enter run. Bconsole will display a list of defined jobs for you to pick from. Since we're only backing up one machine (our test box) you should only have one job resource to choose from. Select the job and press Enter. Bacula will prompt you with a short list of settings, including client name, backup type (full, differential, or incremental) and the storage device. If all the settings look correct, enter yes to kick off the backup.

After a short wait, Bacula should return a "backup completed successfully" message with details including the file(s) backed up, the amount of space the backup consumed, compression ratios, and so on. Congratulations -- you've just run your first Bacula backup.

Only the beginning

Of course at this point we've barely scratched the surface of Bacula's many features -- the user manual is 665 pages long. New users should read the excellent tutorial before embarking on a multi-client Bacula installation.

Despite its apparent complexity, Bacula is straightforward to install and configure, comes with excellent documentation, and works right out of the box. If you have been considering moving from basic tar or rdump backup processes to a more substantial package, you can't go wrong by choosing Bacula. It definitely doesn't suck.

Wednesday, April 23, 2008

Notebook Company Tech Support Comparison

by Kevin O'Brien

Computer problems can be one of the most frustrating situations any person can go through, especially if it is your primary computer for school or work. Downtime can cause missed assignments, projects, or worse if you manage all your finances through a computer and can't pay a bill on time. We decided to call up Dell, Toshiba, Lenovo, HP, Gateway, and Apple to see who was the easiest to deal with, and how long the average call took.

For our calls, we scored each company in multiple areas, including menu navigation, how long until you left the menu system, time to service rep, and total call length. The basic question asked of each company was, "Is there a way to manually eject a CD stuck inside my optical drive?" The expected answer was to use a paperclip in the manual eject hole, but as you might expect, not all companies came to that answer right away. In one case we were offered a brand new drive at first, and in another we ended up being cut off after 35 minutes on hold.

Dell Support Call

For our Dell support call, we used the Home and Home Office contact number listed on the site. This was 1-800-624-9896, which listed 24/7 availability. This number was fairly easy to find using Google, and a bit of navigating on the Dell site.

The first interaction the user gets is a voice activated menu system which was not very difficult to navigate. It took me about 30 seconds to get through the system, and 10 seconds later I was speaking to a service rep. The service representative was very friendly, and as soon as I mentioned my problem he prompted me to find a paper clip and insert it into the small hole on the side of the CD tray. After thanking him for his help, the total time on the phone was 3 minutes and 21 seconds.

Toshiba Support Call

For the Toshiba support call, we used the computer support line, listed as 1-800-457-7777. The main "Contact Toshiba" webpage does not indicate whether this is a 24/7 support line, but it worked for our Saturday afternoon call.

The support line was very easy to navigate, using all phone prompts to navigate the menus. Total time to navigate the menus and be routed to a support representative was 60 seconds. The representative was very helpful, and immediately knew about the pinhole on the side of the CD tray. Total time of the call was 3 minutes and 40 seconds.

HP Support Call

For our HP support call, I used the standard support line, which offered 24/7 availability. The number listed was 1-800-474-6836. Finding this number was very simple, using both Google and a little site navigation.

The HP support line was all voice navigated and the most frustrating to navigate. You are prompted for the type of product, as well as the product model name. In the case of our dv6500t test notebook, it was quite challenging to get the system to understand what I was saying. The voice prompt misunderstood me three times when I said "DV...." and routed me directly to the TV technical support center. On the 4th try, speaking very carefully, I finally got it to recognize what I was saying. After 4 minutes, I had finally made it to a human.

The technical service representative was frustrating to talk with, and would not assist me without a valid serial number. With some poking and prodding, he put me on hold to talk with his manager to find out if he was allowed to tell me how to manually eject a CD without a verified serial number. After 4 minutes of being placed on hold, he came back and thought that a paper clip used on the tray release hole would do the trick. The total time on the phone was 12 minutes and 30 seconds.

Lenovo Support Call

For our Lenovo support call, we used the United States 24/7 support line. The listed number was 1-800-426-7378, with about 200 additional numbers depending on what country you were located in. Finding this list was quite simple with the help of Google and minor site navigation.

One quirk that cropped up with the Lenovo support line was when I first attempted to use my Skype VOIP line from my home. When the support number was dialed, the phone just rang and rang with no pickup from the other end. When I switched to my cell phone to make the call, the line picked up on the first ring.

The Lenovo phone system used a combination of voice and phone prompts to navigate the system. It took about 60 seconds to get routed to a human on the system. The representative required all of my computer and personal contact information before he would start, which added a bit of time to the service call.

After 4 minutes, I started to explain the problem, and received a very odd answer. He explained the drive did not have a manual release to eject a stuck CD, but they would be more than willing to send out a new drive for my notebook. While a new drive would be nice, what about my precious CD that was stuck? After additional hinting towards a possible fix, he finally suggested that a bent paperclip might work using the manual tray release hole. The total time on the phone was 7 minutes and 20 seconds.

Apple Support Call

The Apple technical support line was very easy to find, and it was listed as 1-800-275-2273. The Support page did not show that the line was 24/7, but it worked just great on the weekend when I called.

Let me preface this support call by saying that I did not expect a quick answer for my standard stuck CD question. The MacBook Pro does not have a quick release on the CD tray, and it requires service to fix. I was hoping for a quick answer stating that fact, and hopefully some recommendation on the closest Apple Store.

The phone system used both voice and phone prompts to navigate to the correct area, and quickly routed me to a human in about 60 seconds. The service rep was very friendly, and was quick to help me once I gave him my information. When I explained the problem of the stuck CD, he asked questions about the noises the system was making, and if dragging the disc to the trash bin would eject the CD.

When I explained that the CD never fully clicked into position, he put me on hold to further research the problem. I was expecting a quick return and an explanation that it would require service, but I never heard from the guy again. I was on hold for 35 minutes, and at the 39-minute total time mark I was disconnected from the call and routed back to the original Apple technical service menu. If I were a real customer, I would have gone crazy at that point.

Since the service was so bad the first time, I gave it another shot the next day during normal business hours. Getting through the phone prompts was just as easy as before, and this time around the woman I spoke with seemed more eager to help me out. We went through some of the same troubleshooting steps such as pressing the eject button, restarting the computer, and so on. In the end she decided to send me to an Apple store for further help, and went as far as scheduling an appointment at my local Genius Bar. Total time was 6 minutes.

Gateway Support Call

Gateway technical support is set up differently from other manufacturers'; it has different contact numbers depending on whether the system was purchased directly from Gateway or from a retail store. The number I called was the retail support line, which was 1-408-273-0808.

The Gateway support line leads you through multiple voice prompts that require you to explain your intentions as well as share your model number. During this call the system did require multiple corrections, but unlike the HP line, never transferred me to a different support area. In all it took about 2 minutes to get through the voice prompts and finally speak with an agent. The support agent was very friendly and, even though I did not have my serial number handy, helped me along with my support request. Without referencing any support material he knew about the manual release off the top of his head, and quickly solved my problem. With a friendly reminder about locating my serial number in the future, I was done with the call in 4 minutes and 33 seconds.

Conclusion

Besides realizing that I have way too much free time on the weekends, I found that simple things can make the technical support experience either wonderful or frustrating. The interactive voice prompts were hands down the worst part of most calls. With HP, I was rerouted to the wrong area multiple times because the computer kept thinking my "DV6500t" model number meant I was saying "TV", which then cut me off and routed me to the wrong system. In others you had to pronounce the category you wanted multiple times before it would understand you. Having a simple "press 1 for computers ..." made the interaction much easier.

Overall, Dell and Toshiba were the best for ease of access and quick resolution. They had the easiest phone menu systems to navigate, fastest times to talk with an agent, and the shortest overall call length. Gateway ranked 3rd, with the phone prompts being the only negative aspect of the phone call. Lenovo was also very good, but a new optical drive isn't always the best answer when you are trying to get work done now. Apple ranked in the middle with Lenovo when you averaged the poor call experience with the excellent call the next day. HP came in last with the frustrating phone system that was nearly impossible to navigate. While your experience could vary greatly depending on the individual service representative, we hope this gives you an idea of how each company handles support and what to expect from each of them.

Monday, April 21, 2008

PayPal may block Safari users

By Aidan Malley
Published: 06:20 PM EST

As part of a multi-tiered approach to guarding against online fraud on its site, PayPal says it will block the use of any web browser that doesn't provide added validation measures, potentially restricting the current version of Safari from the e-commerce site.

The money transfer service's Chief Information Security Officer, Michael Barrett, makes the new policy clear in a white paper (PDF) posted this week, which highlights the browser as a key means of putting an end to phishing (false website) scams, alongside such steps as blocking fraudulent e-mail messages and pursuing criminal charges.

When addressing web access, Barrett argues that any user visiting a financial site such as PayPal should know not only that their browser will block fake sites meant to steal information, but also that the browser can properly indicate a legitimate site. Without either precaution, visitors may not only be victims of scams but may lose all trust in an otherwise safe business. This doubly harmful outcome is likened to a car crash without protection.

"In our view, letting users view the PayPal site on one of these browsers is equal to a car manufacturer allowing drivers to buy one of their vehicles without seatbelts," the expert says.

To that end, PayPal is said to be implementing steps that will first provide warnings against, and eventually block, any browser that doesn't meet these criteria.

Most modern web browsers, including Firefox and newer versions of Microsoft's Internet Explorer, are able to support at least basic blocking of phishing sites. The newest, such as Internet Explorer 7 or the upcoming Firefox 3, also support a new feature known as an Extended Validation Secure Socket Layer (EV SSL) certificate. The measure of authenticity turns the address bar green and identifies the company running the site, letting the user know any secure transactions are genuine.

Safari, however, lacks either of these features and so could fall prey to the blocks and warning messages. Barrett doesn't mention the browser by name but notes that any "very old and vulnerable" software would ultimately be blacklisted from the future update to PayPal's service, placing Safari in the same category of dangerous clients as Microsoft's ten-year-old Internet Explorer 4.

Apple's approach to browser security has so far been tentative. The Mac maker briefly incorporated Google's database of fraudulent sites into beta builds of Mac OS X Leopard this past fall, only to pull the feature in later test versions. Release builds of the stand-alone browser for both Macs and Windows PCs have also gone without the anti-phishing warnings, but notably leave code traces inside the software that raise the possibility of improvements through a later update.

Apple hasn't responded to the white paper but is likely to face pressure as PayPal and similar institutions ask for an all-encompassing approach to fighting scams that involves EV SSL and other software techniques. Internet Explorer 7's debut has already had a demonstrated effect on customers, who are more likely to finish signing up for PayPal knowing that the web browser has authenticated the registration page.

"We couldn’t eradicate this problem on our own – to make a dent in phishing, it would take collaboration with the Internet industry, law enforcement, and government around the world," Barrett explains.

Open source testing tools target varied tasks

April 18, 2008 (4:00:00 PM)
By: Mayank Sharma

Testing is an important function of the software development process, no matter how big or small the development project. But not every company or developer has access to professional testing tools, which can run into hundreds and even thousands of dollars. The good news is that they don't need them, thanks to the tons of freely available open source software testing tools.

In simple terms there are two major approaches to testing software -- the manual way (a summer intern with a checklist) or through an automated program. With automated testing, you can spend a lot of money procuring tools, or distract yourself from the task at hand by rolling your own customized testing software.

Instead, you could head over to sites like Open Source Testing (OST), QAForums, Open Testware, and others that catalog various testing tools and look for something that works for you.

"The largest category of open source testing tools," says Mark Aberdour, who manages OST, "is functional testing tools. This can cover a range of practices from capture-replay to data-driven tests, from Web application testing to Java application testing, and lots more in between." Aberdour, who before his current software development role spent 10 years on the other side of the fence in software testing and test management, says that the list of open source testing tools also includes many performance testing tools and test management/defect tracking tools, as well as a good number of security testing tools.

"If you include unit testing tools, then there are large numbers of tools for the more popular languages, particularly where test-driven development (TDD) is more popular," he says. There are several tools for testing Web and Java applications, but "as is the way in open source," Aberdour says, "if there's an itch, someone will scratch it, so there are tools available for all manner of obscure needs." OST lists testing tools for languages such as PHP, Perl, Ruby, Flash/ActionScript, JavaScript, Python, Tcl, XML, and so on. "The list is probably bigger than I've brought together on OST, and TDD practitioners should head over to testdriven.com, which has more focus on that area."

So how do they compare with the expensive proprietary tools? "In some cases, very well," says Aberdour. He points out WebLOAD and OpenSTA as examples that hold up well in the performance testing market -- no surprise there, since they were both originally commercial tools. Underlining his point, Aberdour says, "You have tools like Wireshark, which is huge in the security market, and Bugzilla and Mantis in the defect tracking sector. In functional testing there are a number of really great tools (Selenium, Abbot, Jameleon, jWebUnit, Marathon, Sahi, soapui, and Watin/Watir to name a few) with strong feature sets."

According to Aberdour, in addition to the tool itself, the advantage of the commercial tools is often their integration with an automated software quality (ASQ) suite, and of course an established company behind them, which will feature in a lot of people's selection criteria. Here again things are looking good for open source testing tools. Aberdour points out the RTH test case tool which integrates with Watir, HttpUnit, JUnit, MaxQ and the commercial WinRunner. Even bug trackers like Mantis and Bugzilla integrate with functional testing tools and test case tracking tools.

When it comes to reliability and accuracy, Aberdour says the classic open source arguments apply. The popular tools are tested and used by a high number of people, and many of the tools have evolved large and sustainable communities with many people feeding back on quality-related issues. "For many of these tools, innovation is high and release processes rapid. Yet it is not a free-for-all -- the code base may be open, but write access to the repository is closely guarded, and developers have to earn the right to commit."

According to Aberdour, the main issue for people using the tools is not whether their need is served, since in most cases you will find a tool to do the job, but whether the tool is mature enough to invest in. "Test automation is a major undertaking that has high costs in terms of upskilling your test team and creating test scripts, etc., and people need to know that the tool is a good investment, whether or not there is a license fee to pay. People will be asking if the tool is going to be around in five years' time, what levels of support are available, how good is the feature innovation, bug fixing, release schedules, and so on." Aberdour thinks there is a big gap in selection and evaluation support and paid technical support services, which is what he wants to focus on next with OST.

Of course there are strong and sustainable communities around the more popular tools that provide excellent support, but Aberdour says the market isn't at the point yet where there is a lot of commercial support available. Yet there are some examples of support companies evolving around open source products, and a lot of the product teams will hire themselves out. "In a commercial sense this market is still quite young, but it will mature, and when the product integration, community maturity, and commercial support are right, it has the potential to be highly disruptive. Gartner reckons the ASQ market is worth $2 billion a year, and while Hewlett-Packard and Mercury absolutely dominate, there are hundreds of smaller proprietary vendors at real risk from open source disruption. Things are certainly moving in the right direction."

Open source testing tools offer great performance and are a bargain compared to proprietary testing tools. The lack of a formal and consistent support structure might work against some tools being used to test mission-critical apps. But if you are an open source developer, or work for a software development company pondering its testing budget, spend some time checking out the testing tool Web sites and forum boards. You might save yourself some serious bucks.

Howto setup Database Server With postgresql and pgadmin3

Posted by admin on April 14th, 2008

PostgreSQL is a powerful, open source relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL92 and SQL99 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others.
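
As a quick illustration of those features, the following standard SQL runs unchanged on PostgreSQL (the table and column names are our own examples):

CREATE TABLE customers (
    id        INTEGER PRIMARY KEY,
    name      VARCHAR(100) NOT NULL,
    signed_up DATE DEFAULT CURRENT_DATE
);

CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),  -- enforced foreign key
    total       NUMERIC(10,2),
    paid        BOOLEAN DEFAULT FALSE
);

-- joins and views work as you would expect
CREATE VIEW unpaid_orders AS
    SELECT c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
    WHERE NOT o.paid;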

pgAdmin III is the most popular and feature rich Open Source administration and development platform for PostgreSQL, the most advanced Open Source database in the world. The application may be used on Linux, FreeBSD, OpenSUSE, Solaris, Mac OSX and Windows platforms to manage PostgreSQL 7.3 and above running on any platform, as well as commercial and derived versions of PostgreSQL such as EnterpriseDB, Mammoth PostgreSQL, Bizgres and Greenplum database.

pgAdmin III is designed to answer the needs of all users, from writing simple SQL queries to developing complex databases. The graphical interface supports all PostgreSQL features and makes administration easy. The application also includes a syntax highlighting SQL editor, a server-side code editor, an SQL/batch/shell job scheduling agent, support for the Slony-I replication engine and much more. Server connection may be made using TCP/IP or Unix Domain Sockets (on *nix platforms), and may be SSL encrypted for security. No additional drivers are required to communicate with the database server.

Install Postgresql and pgadmin3 in Ubuntu

The following commands install PostgreSQL 8.2 on Ubuntu 7.10 (Gutsy Gibbon):

sudo apt-get install postgresql-8.2 postgresql-client-8.2 postgresql-contrib-8.2

sudo apt-get install pgadmin3

This will install the database server/client, some extra utility scripts and the pgAdmin GUI application for working with the database.

Configuring postgresql in Ubuntu

Now we need to reset the password for the ‘postgres’ admin account for the server

sudo su postgres -c "psql template1"
template1=# ALTER USER postgres WITH PASSWORD 'password';
template1=# \q

That alters the password within the database; now we need to do the same for the Unix user ‘postgres’:

sudo passwd -d postgres

sudo su postgres -c passwd

Now enter the same password that you used previously.

From here on we can use both pgAdmin and command-line access (as the postgres user) to manage the database server. But before you jump into pgAdmin, we should set up the PostgreSQL admin pack, which enables better logging and monitoring within pgAdmin. Run the following at the command line:
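
The admin pack ships with the postgresql-contrib package; on Ubuntu's PostgreSQL 8.2 it is normally loaded into the template1 database with a command along these lines (the exact path is an assumption - check your contrib directory):

sudo su postgres -c "psql -d template1 -f /usr/share/postgresql/8.2/contrib/adminpack.sql"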

Next, we need to open up the server so that we can access and use it remotely - unless you only want to access the database on the local machine. To do this, we first need to edit the postgresql.conf file:

sudo gedit /etc/postgresql/8.2/main/postgresql.conf

Now we need to edit a couple of lines in the ‘Connections and Authentication’ section.

Change the line

#listen_addresses = 'localhost'

to

listen_addresses = '*'

and also change the line

#password_encryption = on

to

password_encryption = on

Then save the file and close gedit.

Now for the final step, we must define who can access the server. This is all done using the pg_hba.conf file.

sudo gedit /etc/postgresql/8.2/main/pg_hba.conf

Comment out or delete the current contents of the file, then add this text to the bottom of the file:

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database super user can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (autovacuum, daily cronjob, replication, and similar tasks).
#
# Database administrative login by UNIX sockets
local all postgres ident sameuser

# TYPE DATABASE USER CIDR-ADDRESS METHOD

# “local” is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5

# Connections for all PCs on the subnet
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all [ip address] [subnet mask] md5

In the last line, add your subnet mask (e.g. 255.255.255.0) and the IP address of the machine that you would like to access your server (e.g. 138.250.192.115). If you would like to enable access to a range of IP addresses, just replace the last number with a zero and all machines within that range will be allowed access (e.g. 138.250.192.0 would allow all machines with an IP address of 138.250.192.x to use the database server).

That’s it, now all you have to do is restart the server

sudo /etc/init.d/postgresql-8.2 restart

That’s it; you can now start using PostgreSQL in Ubuntu.
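
To verify that remote access works, you can connect from one of the machines you allowed in pg_hba.conf using the standard psql client (replace your.server.address with your database server's address):

psql -h your.server.address -U postgres -d template1

You should be prompted for the password you set earlier; type \q to quit.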

Create a Database from command line

You can also use pgAdmin III for all PostgreSQL-related administration tasks.

To create a database with a user that has full rights on the database, use the following commands:

sudo -u postgres createuser -D -A -P mynewuser

sudo -u postgres createdb -O mynewuser mydatabase
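
To confirm that the new account works, connect as that user and create a test table, for example (this uses the local md5 rule from pg_hba.conf above):

psql -h localhost -U mynewuser -d mydatabase
mydatabase=> CREATE TABLE notes (id SERIAL PRIMARY KEY, body TEXT);
mydatabase=> \q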

Rethinking Gobuntu

Sun, 2008-04-20 20:22 — Magic Banana

In a few days, both the gNewSense and Gobuntu projects will release new versions of their distributions. Based on Ubuntu Hardy Heron, they will aim at satisfying the most demanding users in terms of freedom. This apparent duplication of work may not last. Indeed, some developers behind Gobuntu (including Mark Shuttleworth himself) are thinking of "channelling the energy from Gobuntu into gNewSense".

I was browsing the archives of the gobuntu-devel mailing list when I found a message from Mark Shuttleworth. Questioning the future of the Gobuntu project, he posted it six days ago:

"Perhaps we really are on the wrong track, that the only way to meet the
needs of the gNewSense folks is to have completely different source
packages to Ubuntu. If that is the case, then I think it would be better
to channel the energy from Gobuntu into gNewSense.

I had hoped to see more participation and collaboration around Gobuntu
because of the benefits of keeping up with the standard Ubuntu (regular
releases, security updates etc). However, it seems that the audience for
a platform like this is willing to accept infrequent releases and less
maintenance in return for a platform which can be modified more
radically. That's OK, it's just a bit unexpected - I thought we could
get the best of both worlds, with six-monthly releases of something that
excluded *binary package* that were controversial in the eyes of the
FSF, but retained access to everything else in Ubuntu.

I don't mind having been wrong in that expectation, I can see the
arguments in favour of less collaboration in the case where it is more
important to be different than to have infrastructure in common, and
from what I've seen on this list, the desire to be different (have
different source packages as well as binary packages) is stronger than
the desire to collaborate (share infrastructure, release cycles etc).

I'm not sure that the current level of activity in Gobuntu warrants the
division of attention it creates, either for folks who are dedicated to
Ubuntu primarily, or to folks who are interested in gNewSense. I would
like us to have a good relationship with the gNewSense folks, because I
do think that their values and views are important and I would like
Ubuntu to be a useful starting point for them. But perhaps Gobuntu isn't
the best way to achieve that.

So, I would like to hear from the gNewSense guys how they would like to
be involved in Ubuntu, to help ensure that Ubuntu is a useful starting
point for their important work. If Gobuntu is not the best way to
achieve that, then I think we should stop working on it and encourage
folks who want that to focus their efforts on gNewSense, while at the
same time figuring out how Ubuntu can be more useful for gNewSense.

Mark"

First of all, let us note how Mark Shuttleworth cares about the users' desire for freedom. Although he can be considered the spiritual father of Gobuntu, he encourages Gobuntu's developers to move to gNewSense if the purpose of the Gobuntu project would be better achieved there. He even thinks of helping gNewSense by making Ubuntu "more useful" for it. Such philanthropy is remarkable.

Thus, Ubuntu's leader suggests that users would benefit from having a single Ubuntu derivative strictly following the Free Software Foundation's views on a truly Free Operating System. This GNU/Linux distribution would be gNewSense, because it maintains independent repositories, whereas Gobuntu shares them with Ubuntu. Let us recall the advantages of this independence:

* The ability to modify a package containing some proprietary blobs (e.g., the Linux kernel) or presenting other troubles (e.g., Firefox, which, somehow, encourages the installation of proprietary extensions) instead of removing it.
* No affiliation with any repository of proprietary software (enabling the Restricted or the Multiverse repository in Gobuntu is a matter of one click in Synaptic).
* Beyond the repositories themselves, working on Ubuntu's package system means using Launchpad, a Web application distributed under a proprietary license.

In addition, gNewSense is endorsed by the Free Software Foundation. Kurt von Finck, one of the main developers behind Gobuntu, emphasises this while confirming the bridge-building with gNewSense. Here is what he wrote a few hours before Mark Shuttleworth's message:

"Gobuntu is 100% free software. The CD image provided to you
contains nothing but free code. But with Gobuntu you are free to add (or
subtract) what you see fit, and the Ubuntu repositories make it
trivially easy to do so. Thus, while Gobuntu is free software as defined
by the FSF, RMS will not recommend it personally.

Now, all that having been said, please be aware that Paul O'Malley from
the gNewsense project and myself have plans to discuss these very issues
at the Ubuntu Developer's Summit in May. Questions vis-a-vis Gobuntu and
gNewsense are arising frequently, and need to be answered definitively
so we can all get back to work and stop playing politics.

Before anyone takes my remarks here as some sort of "official" statement
(which they most assuredly are not) I would ask that you refrain, and
instead wait for the fruits of the UDS conversations between Mark, Paul,
FSF staff, and yours truly.

We'll get the discussion times on the UDS agenda, and everyone is
welcome to participate when the time comes.

Flame on.

--

./k

Kurt von Finck

Senior Ubuntu System Support Analyst
Canonical, Ltd."

All in all, the discussion on the gobuntu-devel mailing list is very similar to the one I had a few months ago. For the intended audience, it seems that gNewSense's advantages in terms of ideological purity outweigh the more pragmatic concerns that Mark Shuttleworth and Gobuntu's developers thought were essential.

Furthermore, starting with the next release of Ubuntu, the need for the Gobuntu project becomes less pressing. Indeed, on the gobuntu-devel mailing list (definitely an interesting source of information!), I discovered this other message:

"Current Ubuntu CD images now have a "Free software only" checkbox on the
"Boot options" menu (select language and then press F6 twice). This
should make things easier for people who happen to have an Ubuntu CD to
hand and want to install a system without anything from the restricted
component.

As of last week, this actually works when installing from the desktop CD
too. While there are some packages from restricted in the live
filesystem, when you select this boot option they will be removed from
the target system after the bulk file copy has taken place. It turns out
that this wasn't as hard to do as I'd thought - a mere 13 lines of code!

Cheers,

--

Colin Watson"

Although no official position has been taken yet, Gobuntu Hardy Heron may be the final release of this project. Some of its developers may move to gNewSense, while others may continue to help track proprietary blobs in Ubuntu's repositories. gNewSense would eventually gain more developers and some help from the Ubuntu project. Gobuntu would, in a sense, be integrated into Ubuntu through a boot option. In the end, the winner would be neither gNewSense nor Gobuntu. It would be the user.

Add faceted search to Thunderbird with Seek

By Dmitri Popov on April 21, 2008 (9:00:00 AM)

Do you struggle to keep tabs on your Thunderbird inbox? The SIMILE Seek extension might be the answer to your problems. The extension adds faceted browsing to Thunderbird, which allows you to search and manage your email messages in a radically different way than you are used to.

To better understand how faceted browsing works, take a look at sites like the venerable Open Directory Project. The site allows you to narrow your search by filtering the data by the criteria, or facets, you choose. For example, you may start with a broad category called Computer, then narrow it to Open Source, then Software, and so on. Each time you choose a category, you effectively add a facet to your search, and thus make it more precise.

The Seek extension can help you to search your email in a similar manner. Once installed, Seek adds a panel to Thunderbird's interface with several default facets. Each facet displays search results for the criteria it represents. For example, the Tag facet shows all the tagged emails grouped by their tags. The Recency facet displays all emails received today, the day before, within a week, and so on.

Each facet not only provides a quick overview of the messages that match the facet's criteria, but also lets you filter the results. For example, you can view messages tagged as Important by clicking on the Important tag in the Tag facet. Another way to narrow search results is to use the "Type to filter" field in some of the facets. Start entering a search term in the field, and the facet narrows the search results as you type. You can, of course, refine the search results by combining several facets. For example, you can quickly find messages sent directly to you (the To CC/me facet) by a particular person (the From facet) the day before (the Recency facet). You can see the number of messages matching the chosen facets in the Results pane, which contains a few other useful features. Using the search field, you can combine more traditional text search with faceted browsing. And if you tick the Include whole threads check box, Seek groups the messages into threads, giving you a better overview of your correspondence.

You can easily rearrange facets in the Seek panel by using drag and drop. You can also remove some of the default facets and add more facets by choosing them from the list of available facets. So, if you don't use tags, you can replace the Tag facet with something else; for example, the Priority facet.

Seek also boasts an impressive visualization feature, which is based on another nifty tool from the SIMILE project called Timeline. Select Visualize from the drop-down list in the Results pane, and Seek maps the email messages on a visual timeline. At least this is how it's supposed to work in theory. In practice, however, I couldn't make this feature work. Every time I tried it, the extension threw an error message complaining about an unresponsive script.

Another of Seek's weak points is that the extension performs indexing every time you select a folder. It's not a big problem if the folder contains a couple of hundred messages, but it can become an issue if you have thousands. For example, on my machine, it took Seek 1 minute and 12 seconds to index a folder containing 10,338 emails. While this is pretty fast, it's still quite a nuisance if you switch often between different folders. To work around this problem you can enable Seek only when needed, and then deactivate it when you are done searching, by clicking the Disengage link.

Despite these drawbacks, Seek is by far the most impressive and innovative extension for Thunderbird out there. If you want to beef up your email client with powerful search capabilities, Seek is worth a try.

Every Monday we highlight a different extension, plugin, or add-on. Write an article of less than 1,000 words telling us about one that you use and how it makes your work easier, along with tips for getting the most out of it. If we publish it, we'll pay you $100. (Send us a query first to be sure we haven't already published a story on your chosen topic recently or have one in hand.)

Baker College wins National Collegiate Cyber Defense Competition

By Joe Barr on April 21, 2008 (4:00:00 PM)

Baker College of Flint, Mich., defeated defending champion Texas A&M University and four other regional winners from across the country to capture the third annual National Collegiate Cyber Defense Competition, which concluded in San Antonio, Texas, over the weekend. Texas A&M finished a close second, and the University of Louisville took third. Also competing for the championship were the Community College of Baltimore County, Mount San Antonio College of Los Angeles County, and the Rochester Institute of Technology.

Hosted by the Center for Infrastructure Assurance and Security (CIAS) at the University of Texas at San Antonio (UTSA), the event pits six regional winners, each given a similar small enterprise network to protect, against a team made up of experienced security professionals dubbed the Red Team, a.k.a. Team Hilarious.

Teams are scored on how well they protect their identical networks, made up of a Cisco router and five servers: Windows 2003 running Internet Information Services, Windows 2000 running DNS, Solaris x86 running Apache and OpenSSL, Gentoo running MySQL and NFS, and BSD running Sendmail. Team workstations can run Vista, Windows, Fedora, or BSD, as the team prefers. Teams are required to provide SMTP, POP3, HTTP, HTTPS, and DNS services throughout the competition, and outages on any of those services result in deductions from their score. At specified times, the teams are also asked to bring up FTP, SSH, RDP, and VNC services, in accordance with the 2008 competition rules.

In addition to the attackers (the Red Team) and the defenders (the Blue Teams), there is also a White Team. The White Team acts as the overall network operations center, as observers, and as the communications center. All requests for information, assistance, and problem reporting by the competing teams go through the White Team; teams are not allowed direct communication with the outside world except for publicly available information and software available on the Internet. The White Team also delivers in-competition requests for new services and scores the teams' performance.

The entire event took place at the San Antonio Airport Hilton hotel, and each team (Red, White, and each competing Blue team) had its own private, closely guarded room. A White Team observer was present in each competing team's room for the entire competition.
Team Hilarious

Red Team captain Dave Cowen has a jovial face and a pirate's beard. When his laughter could be heard in the hall outside the Red Team room, collegians winced, because they knew that another server had just fallen prey to the Red Team's relentless attacks.

The other Red Team members (first names only) Luke, Ryan, Evan, Jacob, and Leon are all professionals in the security industry. On Friday, the first day of the competition, the adrenaline of the hunt, the chase, and the pursuit of hapless quarry was in the air, as team members sat around the conference table, staring into the screens of their laptops, some using two laptops at once, sharing information as they gleefully began probing the target networks for weaknesses and mapping IP addresses to specific configurations.

One of the first remarks heard after the competition began was, "Interesting, the Solaris exploit from last year still works." That was followed shortly by Dave Cowen announcing "OK, professionals, we need a local Solaris 5.10 exploit for privilege escalation."

In addition to a few members of the press, the Red Team room was also visited by various federal agents. A contingent from the Secret Service was present all weekend. Three black-suited gentlemen claiming to be from the FBI were present Friday. Defense Information Systems Agency agents were present as part of the competition infrastructure, and among their other duties, helped escort journalists from room to room during the event.

The mood in the Baltimore County Community College Blue Team room Friday afternoon was in stark contrast with the lightness and laughter heard in the Team Hilarious room. All seven team members were focused on the job at hand, which was to begin securing the network they found running at the start of the competition. Voices were muted, there was no idle chatter, and everyone was busy at whatever task they had been assigned.

Teams are allowed to modify the configurations as they see fit during the event, so long as they follow the rules and provide the required services. The configuration itself seems to have been a weak spot for defending the networks, and at the end of the competition on Sunday, Cowen said that you reach a point where the configuration is more important than the supply of exploits available to attackers. He made that remark not long after hacking a team's Web server so that it displayed their credit card database as its homepage during the last half hour of the competition.

A two-hour awards luncheon took place shortly after the end of competition Sunday morning. There were speeches by US Representative Ciro Rodriguez and Cornelius Tate, the brand-new Director of the DHS Cyber Security Division, prior to announcing the winners. This year's competition was the closest ever, with three teams in a virtual tie after the second day, and Baker edging defending champion Texas A&M by the slimmest of margins at the end. Whether they took home the gold or not, all the teams were made up of bright, skillful students, and given the presence of two community college teams in the final six, it's obvious that the size of the school is not as important as the skill of its students in the world of cyber defense.

Baltimore County Community College, the only team with a female competitor, and Mount San Antonio Community College in Los Angeles proved that network security skills are not the exclusive domain of larger, better-known institutions. Their presence at this national competition is roughly the equivalent of a community college basketball team making it to the NCAA's Final Four, and both schools and students deserve kudos for going head to head against teams from much larger schools, especially since those schools may include two graduate students on their teams.

Dr. Gregory White, director of the UTSA CIAS and one of the founders of the original competition when it was held on a regional rather than national basis, explained that there is a large network and computer security population in San Antonio, primarily because the Air Intelligence Agency is located there. UTSA was a logical place to become an academic center for computer and network security. That led to it becoming the first Texas university to be designated a "Center for Academic Excellence in Information Assurance Education" by both the DHS and the National Security Agency, and it currently offers bachelor's and master's-level degrees in information security from several of its schools.

Sponsors for this year's event included the AT&T Foundation, DHS, Cisco Systems, Acronis, Northrop Grumman, Accenture, the Information Systems Security Association, Core Security, our sister site ThinkGeek, Code Magazine, and Pepsi. White said that more sponsors are needed for future competitions in order to do all the things CIAS wants to accomplish.

Wednesday, April 16, 2008

What's the right filesystem for your portable backup drive?

By Nathan Willis on April 16, 2008 (7:00:00 PM)



So you just bought an external hard drive for backups. Now, with what filesystem should you format it? Ext2? FAT32? No matter which one you choose, there are trade-offs to consider.

You face the same choice whenever you buy a USB thumb drive, but for a backup drive, a lot more is at stake. Those backups have to be there and be reliable when disaster strikes. On the one hand, you need to preserve your data and your metadata, so not just any filesystem will do. But on the other, if you're not at your home base, you need to be able to access it from anywhere, so you can't be too obscure.

Back in February, my streak of never needing to restore from backup came to an end (though my no-hard-drive-failure streak is still running strong at 11 years). Filesystem corruption zapped some work from my laptop while I was on the road. It was not a sizable amount in the grand scheme of things, but inconvenient in that it happened while I was away from home base. Once I returned, I started shopping for a pocket-sized external hard drive to carry around to deal with such occasions in the future.
Evil number one: Old and FAT

If a drive comes formatted out-of-the-box, it likely uses FAT32 -- the old Microsoft classic guaranteed to be readable on any computer modern enough to have the right physical connector. You can use FAT32 for a backup drive and be safe in the knowledge that you can retrieve files from it on a Linux, Windows, or Mac OS X system. But FAT32 limits individual file sizes to 4GB, a restriction way too small for video editing and increasingly for DVD ISOs and virtual machine images. More importantly, FAT32 does not support Unix file permissions, which adds the hassle of restoring ownership and write permissions to any backup recovery.
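If you do settle on FAT32, the 4GB ceiling at least has a low-tech workaround: chop oversized files into pieces before copying them over, and reassemble them when you restore. A minimal sketch, with an illustrative archive name and chunk size:

# Split an oversized archive into 2GB pieces that FAT32 can store
split --bytes=2G backup.tar backup.tar.part-

# Later, reassemble the pieces to recover the original file
cat backup.tar.part-* > backup.tar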

NTFS overcomes those technical limitations, but its write support under Linux and OS X is, at best, a problematic hurdle. That makes it risky: a backup system that is a daily hassle to write to is a backup system that doesn't get used. Getting out of that habit is the last thing you want to do.
Evil number two: Nobody understands me

Ask for advice on formatting an external drive in a Linux forum, and the traditional answer is ext2. It is the standard native-to-Linux filesystem -- a reliable, no-nonsense choice for well over a decade, and thanks to its compatibility with successor ext3, unlikely to disappear from Linux in the foreseeable future.

All of which makes ext2 a good option as long as you live in a perfect world, in which no matter where you go there is a Linux computer handy. But if you occasionally find yourself in a room with one of the 90% of the world's PCs that run Windows, you will find it unable to read ext2 partitions.

There are two actively developed projects to bring ext2 support to Windows. The Ext2 Installable File System (IFS) implements system-level read/write access for Windows systems from NT 4.0 to Vista. It is regarded as generally stable, although it does not respect Unix file permissions, and cannot read Logical Volume Manager (LVM) volumes. It is freeware, but not open source.

Ext2fsd is GPL-licensed, and supports many of the same features as Ext2 IFS. It does not run on Windows NT or on Vista, though, and its developer cautions that it is not to be regarded as stable. It does not support LVM, but the latest development builds do support reading ext3 journals.

On Macs, the only option is ext2fsx, an open source project that appears to have gone dormant late in 2006. It was last verified to support OS X version 10.4. Recent rumblings among a new set of developers in the project's discussion forum indicate a growing interest in resuming development, so 10.5 users may not be left out in the cold for long.
Calculating which is the lesser

Not a particularly heartening choice, is it? Everything can read FAT32, but you'll lose your file permission settings and will have to take special measures to split up oversized files. Ext2 will be a breeze to use in Linux, but you will have to find another Linux system to read it.

You can address the latter point by always carrying around a bootable Linux CD or flash drive, or perhaps keeping a copy of the ext2 tools for Windows and Macs on portable storage as well. That is a workable solution, although it adds complexity and additional points of failure. Scratch or lose your portable Linux image and you are back to square one. Plus, the ext2 solutions for the proprietary OSes all require administrator privileges to install -- something you may not have access to in an emergency disaster recovery scenario.

But then again, how many files do you actually have that exceed 4GB in size? Are file ownership and permissions for data (i.e. not system) files really that difficult to restore, compared to the time required to reinstall the OS itself after a hard drive failure?
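They need not be, if you are willing to add one archiving step. A minimal sketch, assuming a hypothetical /home/nathan directory and a portable drive mounted at /media/portable: store the backup as a tar archive, which records ownership and permissions inside the archive itself, regardless of the filesystem it sits on.

# Create the archive on the FAT32 drive; tar records owners and modes internally
tar --create --file=/media/portable/home-backup.tar /home/nathan

# When restoring on a Linux system (as root), re-apply the stored permissions and owners
tar --extract --preserve-permissions --same-owner --file=/media/portable/home-backup.tar -C /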

What I have done in the past is use ext2 for the bulky 3.5-inch backup disk attached to my desktop system at home, and FAT32 on the pocket-sized portable drive I take on the road. Neither approach is a perfect solution, but considering the different steps I'd likely have to take to recover from a failure in both circumstances, at least that approach simplifies a quick recovery in both.
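For reference, formatting each drive is a one-liner; the device names below are placeholders, so double-check which device is which before running anything destructive:

# Desktop backup disk as ext2 (replace /dev/sdb1 with your actual device)
mkfs.ext2 -L desktop-backup /dev/sdb1

# Pocket-sized portable drive as FAT32 (again, the device name is a placeholder)
mkfs.vfat -F 32 -n PORTABLE /dev/sdc1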

But my choice certainly isn't the only reasonable way to look at it. What strategy do you use, and when disaster strikes, how has it fared?

A year later, sales of Linux on Dell computers continue to grow

Sales figures not released, but program is thriving, Dell says

April 14, 2008 (Computerworld) As Dell Inc. approaches its one-year anniversary of selling laptop and desktop computers preloaded with Ubuntu Linux, the company is continuing to expand the fledgling program to new computer models and markets.

In interviews at Dell's Parmer campus north of Austin last week, four Dell representatives said sales of the Linux-loaded machines are encouraging.

Though they declined to give sales figures for the Linux-equipped machines, the Dell officials were adamant in saying that the program wouldn't be continuing or adding new models if the sales figures were not adequate.

"A [sales] number is not going to validate it as much as our actions to date," which include adding new models and configurations, said company spokeswoman Anne Camden.

Dell first offered Linux on its machines in 1999, when it installed Red Hat Linux on a selection of Dell servers, said Matt Domsch, the company's Linux technology strategist in the CTO's office. A short time later, Dell tried selling consumer-focused laptops with Red Hat Linux, but the effort was not sustained due to inadequate demand.

Dell has continued to sell enterprise servers with Linux since that 1999 debut, Domsch said. The recent Linux-on-Dell program for laptops and desktop machines, however, has been gaining momentum, he said. "If the program wasn't successful, we wouldn't be able to continue it," Domsch said.

The machines can be configured and ordered at the company's Dell and Linux Web page. In January, Dell announced another Linux-loaded laptop machine, with a host of high-performance features.

The Linux-on-Dell idea emerged in February 2007, after CEO Michael Dell debuted a new company-hosted blog called IdeaStorm, where customers could offer ideas and input on prospective new products and services. More than 100,000 people posted comments about wanting to see the company sell computers straight from the factory with Linux preloaded.

Ten weeks later, in May last year, Dell announced that it would begin selling Linux-loaded machines to consumers and businesses.

So far, Dell hasn't advertised Linux on its machines in consumer advertising campaigns; rather, it's relying on open-source enthusiasts seeking out the machines on the Dell site. Those people are often the same ones who suggested the combination in the first place.

"Those who care, know" that Dell is selling the machines, said Russ Ray, a Dell product marketing representative. "If you know Linux, you're going to know we sell Dell products with Linux on them."

Consumer-focused ads featuring Linux on Dell could appear at some point, Ray said, but it's not critical to the company. "I think that will occur when there's a reason for that to occur," he said. "We would like to get to a place where to some degree, it really doesn't matter" to consumers which operating system is on the machines.

For business users, there has been a growing interest in the Linux-on-Dell program, Ray said. "We have had many inquiries" regarding cost savings, infrastructure needs, desired applications and compatibility with existing Unix systems, he said. "It's the stuff that you would assume."

John Hull, manager of Dell's Linux engineering department in its Global Solutions Engineering division, said that two years ago, he would never have expected such a program to get started.

The Linux-on-Dell program has made Dell machines more desirable for users who are seeking alternative operating systems to Microsoft Corp.'s Windows, Hull said. "People might have looked at other brands previously but are now looking at Dell because of Linux," he added. "We started in the big markets, where they were asking the loudest, and we went from there."

The company has employees who monitor a wide variety of blogs, looking for discussions involving consumers who are seeking information on Linux, laptops and desktops, Camden said. The employees identify themselves and post replies pointing people to Dell and its Linux offerings. "They evangelize it on that kind of level," she said.

Tuesday, April 15, 2008

The iPhone SDK and free software: not a match

By Nathan Willis on April 15, 2008 (7:00:00 PM)

Apple recently released a software development kit (SDK) for the iPhone, but if you were hoping to use it to port or develop original open source software, the news isn't good. Code signing and nondisclosure conditions make free software a no-go.

The SDK itself is a free download, with which you can write programs and run them on a software simulator. But in order to actually release software you've written, you must enroll in the iPhone Developer Program -- a step separate from downloading the SDK, and one that requires Apple's approval.

Since its release, many in the free software and open source community have debated whether the terms of the iPhone Developer Program are compatible with common licenses such as the GPL. In a search for a definitive answer, we asked the principal parties themselves. Apple did not reply to our inquiries, but Free Software Foundation (FSF) Licensing Compliance Officer Brett Smith was happy to discuss the licensing issues in depth.

First, let's look at the SDK and the developer program that accompanies it.

To download the SDK, you must first sign up for a free Apple ID -- an existing Apple Developer Connection, .Mac, or iTunes Store account will do -- and use it to register with Apple as an iPhone Developer. The SDK by itself won't let you create applications that run on actual iPhone devices, though. To do that, you must enroll in Apple's iPhone Developer Program, for a fee starting at $99.

For the time being, Apple is not accepting all applicants. Currently only US residents age 18 and up are eligible, and Apple is selecting a limited number of applicants. Who gets approved, and speculation as to why, are popular discussion topics on Apple-centric Web sites.

If your application is approved, a document called the Registered iPhone Developer Agreement lays out the terms and conditions under which you can create iPhone apps. It is those conditions that conflict with free software licenses like the GPL.
Problem: code signing

The iPhone Developer Program establishes Apple as the sole provider of iPhone applications. You can choose not to charge for an app you author, but the iTunes Store is the only channel through which it can be delivered to end users and installed. Apple signs the apps it approves with a cryptographic key. Unsigned apps won't run on the iPhone.

This condition conflicts with section 6 of the GPLv3, the so-called "anti-TiVoization" provision. In particular, it prohibits Apple from distributing a GPLv3-licensed iPhone application without supplying the signing keys necessary to make modified versions of the application run, too.

Thus, you as the developer could attempt to place your code under the GPLv3, but Apple could not distribute it -- and since only Apple-signed programs will run, no one else could distribute it either.

The FSF's Smith says the fact that the author of the program (i.e., you) and the distributor of the binary (i.e., Apple) are unrelated entities makes no difference. "If a program is meant to be installed on a particular User Product, GPLv3 imposes the same requirements about providing Installation Information whether the software is directly installed on the device or conveyed separately."

Because of the GPL's viral nature, any app that is derived from other GPLv3 code must be licensed in a way that preserves GPLv3's code signing requirement. But there are still projects that have chosen to retain earlier licenses, such as GPLv2, and prior versions of the GPL did not include the code signing requirement. Thus you could in theory place your work under GPLv2, as long as it was either entirely original or derived only from code licensed under GPLv2 and earlier. But the result still would not qualify as free software, since no one could alter your source code and run the modified result on their phone.

As Smith explains, "partially free" software is still non-free. "The Free Software Definition is not a checklist, where software that fulfills three of the criteria is somehow 'better' than software that only meets two. The Free Software Definition lists the bare minimum rights you need to make sure that the software works for you, instead of somebody else. If you've been deprived of any single one of those rights, whether by a license, a patent, code signing, or any other means, then you've lost your freedom. You no longer control the computer; it controls you. Getting some source is a small consolation prize for losing your own autonomy."

But the aforementioned situation is only an option if Apple would allow you to release the source....
Problem: nondisclosure

Unrelated to the code signing complication is another issue that restricts your choice of licenses. The Registered iPhone Developer Agreement is a contract between the developer (you) and Apple. If you violate any of the terms and conditions of the agreement, you lose your right to use the coding utilities in the SDK and all of its information and documentation.

Section 3 of the document is a nondisclosure agreement (NDA). It defines "all information disclosed by Apple to you that relates to Apple's products, designs, business plans, business opportunities, finances, research, development, know-how, personnel, or third-party confidential information" as "Confidential Information" -- excluding specific information that is available elsewhere. You must agree not to "disclose, publish, or disseminate" any of the aforementioned Confidential Information, and not to use it "in any way, including, without limitation, for your own or any third party's benefit without the prior written approval of an authorized representative of Apple in each instance."

Those broad restrictions may be standard issue for an NDA, but they constitute a binding agreement that trumps your usual right to place a license of your choosing on your source code. As Smith puts it, "If you agree to an NDA that prohibits you from sharing your program's source, then you cannot release that program under the GPL, or incorporate any GPL-covered code in it."

Publicly releasing source code that uses the iPhone APIs as documented in the SDK and Developer Program could easily fall under the definition of "disclosing," "publishing," or "disseminating" Confidential Information, as none of the iPhone APIs are documented elsewhere. A clearer word from Apple regarding what exactly constitutes "disclosing," "publishing," and "disseminating" would be helpful, but until the company makes such a clarification, the conservative interpretation is the safest.

You could ask Apple for permission to publish your source code, but in the absence of such permission, violating the agreement terminates your right to use the SDK and to publish your software, regardless of the license you choose.

Finally, the fact that currently only US residents age 18 and older can even sign up for the iPhone Developer Program disqualifies many free software developers right out of the gate. The US-only restriction will likely be lifted, just as the devices themselves have been rolled out country by country. But the Registered iPhone Developer Agreement is intended to serve as a binding contract, so the age restriction is certainly here to stay.

There are other restrictions on what iPhone applications are allowed to do, which some might consider barriers to free software. The limitations already discussed affect all apps, regardless of function.
Freedom!

Of course, the code signing and NDA hang-ups apply only to developers who sign up for the program. Reverse-engineer the iPhone and you can code to your heart's content. So long as you do not expose yourself to the official SDK, you can license your work however you want.

All of the third-party iPhone apps available up until now are the result of jailbreaking the devices, a pastime at least partly responsible for Apple's decision to create an iPhone SDK in the first place.

To its credit, the Apple development community seems to recognize the limitations of the iPhone SDK as they apply even to non-free software, and is writing about them.

As the iPhone SDK Era begins, it is interesting to look back at what the FSF had to say about the launch of the device itself. The FSF launched GPLv3 on the same day that Apple launched the iPhone, and used the event to address the restrictions placed on iPhone owners.

Executive Director Peter Brown described the device as "crippled, because a device that isn't under the control of its owner works against the interests of its owner." The document goes on to cite DRM locks and "TiVoization" as the principal problems.

In the months since those words were written, the TiVoization problem might have sounded abstract, but the details of the iPhone SDK make it crystal clear -- you cannot write free software for the iPhone, even if you want to.

Steve McIntyre elected Debian Project Leader 2008

The winner of the Debian Project Leader (DPL) 2008 election is Steve McIntyre. His term as DPL will run for one year, starting on April 17th, 2008.

According to the results announcement email, the detailed results will soon be posted on the election page. You can read all three DPL 2008 candidate platforms at the election page.

You can also see a graphical output of the process at this page. This year, over 49% of developers eligible to vote sent their votes to the Condorcet system.

Organizations without a common understanding of authority and leadership cannot survive in the long term. And those with direct democratic forms of participation tend not to scale well and are noted for their difficulty managing complexity and decision-making, which can lead to failure.

The Debian Project community has designed and evolved a solid governance system since 1993, establishing shared conceptions of formal authority, leadership, and meritocracy, limited by defined democratic adaptive mechanisms.

Since its foundation in 1993, the Debian Project has gone through four phases of its governance system and five conceptions of leadership and meritocracy.

Between 1997 and 1999, the community drafted a Constitution to formalize leadership roles, rights and responsibilities. It was ratified using itself, as a test case.

The governance system was validated in 2006, when a crucial conflict was resolved within the approved framework.

The Debian Project Leader 2008 election is another confirmation that this framework suits the Project's objectives, as defined by the Debian Social Contract and the Debian Constitution and ratified in Debian Policy, and it is one of the reasons why its developers are so committed.

The evolution of the Debian Project's system of governance was thoroughly studied by Siobhán O'Mahony, Assistant Professor at the University of California's Graduate School of Management, and Fabrizio Ferraro, General Management Professor at IESE. You can read more about it at this page, which includes a link to the complete scientific study with detailed research data and analysis.
About the Debian Project

Debian GNU/Linux is one of the free libre operating systems (GNU/Linux, GNU/Hurd, GNU/NetBSD, GNU/kFreeBSD), running 18733+ officially maintained packages on 15 hardware platforms, from cell phones and network devices to mainframes and supercomputers, developed by more than two thousand volunteers from all over the world who collaborate via the internet on the Debian Project.

Debian's dedication to Free Libre Open Source Software, its constitutional non-profit nature, its open and meritocratic development model, organization and social governance make it a first among free libre operating system distributions.

The Debian project's key strengths are its volunteer base, its dedication to the Debian Social Contract and the Debian Constitution, and its commitment to provide the best operating systems attainable, following a strict quality policy and working with an established QA Team and helpful users who report bugs, make suggestions, exchange ideas, and share their experiences.

You can help the Debian Project without joining it, and even without being a programmer: become a development or service partner company or institution through the Debian Partner Program, or simply make a donation to the Debian Project.

Debian Project news, press releases, and press coverage can be found on the official Debian wiki page. The PR contact is the debian-publicity list.

File Synchronization with Unison

April 14th, 2008 by Mike Diehl in HOWTOs

Keeping the files on multiple machines synchronized seems to be a recurring problem for many computer users. Until I discovered Unison (http://www.cis.upenn.edu/~bcpierce/unison/) I never really had a completely satisfactory solution.

What we'd like to be able to do is efficiently keep two or more servers completely synchronized with each other no matter what gets changed on any of the servers. In the simplest case, we have a production server and a backup server that we need to keep in sync. We might have a cluster of servers used in a load balancing configuration. In the worst case, we might have a group of computers where changes are occurring on any or all of the devices. Consider the case where we have a computer at the office, a laptop, and a work computer at home. We want to be able to work from any computer at any time.

One solution is to simply use scp (http://www.openssh.com/) to copy the files from one computer to the other or others. This solution requires that we designate one computer as the “master”; only changes that occur on the master computer are propagated to the other, slave, computers. Besides a lack of flexibility, this solution has one serious drawback: it copies every file from the master to each slave computer, every time the synchronization process is started. On a slow network link, or with a large directory structure, this often proves untenable.
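As a rough sketch of that approach (the slave's host name and destination path here are placeholders):

# Push the entire directory from the master to a slave, recursively;
# every file is copied whether or not it has changed
scp -r /home/mdiehl/Development backup-host:/home/mdiehl/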

A slightly better solution is to use rsync. (http://samba.anu.edu.au/rsync/) The rsync program only transfers those files that are different. In fact, rsync only transfers those parts of a given file that are different. This mechanism is quite efficient, but it still suffers from the same master/slave architecture as scp.
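An equivalent rsync sketch, again with a placeholder slave host:

# Mirror the directory to the slave; -a preserves permissions, ownership, and timestamps,
# -z compresses the transfer, and --delete removes files that no longer exist on the master
rsync -az --delete /home/mdiehl/Development/ backup-host:/home/mdiehl/Development/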

There are solutions that depend upon kernel services such as the FAM (http://oss.sgi.com/projects/fam/faq.html) or clustered filesystems like Coda. (http://coda.cs.cmu.edu/doc/html/index.html) These solutions, of course, require a kernel recompilation, which seems like a lot of work to simply keep a couple servers synchronized.

So far, unison is the simplest and most effective solution I've found. Unison will correctly synchronize two servers even if changes occur on both servers. If a change occurs in the same file on both servers, this causes a conflict, and unison will display an error message. File content as well as permissions and ownership can be synchronized. Unison even allows you to keep Linux machines and Windows machines in sync. For those of you who have slow network links, it's nice to know that unison works like rsync in that it only transfers those parts of a file that have been changed, when possible.

Installing unison is trivial. The package management system in most Linux distributions can automatically install unison for you. Otherwise, simply download the source and compile it. You will need OCaml installed, though.

Unison can be configured to use a native network protocol, or to use OpenSSH to transfer files. The native protocol is neither authenticated nor encrypted, so it isn't nearly as secure as the ssh configuration. I recommend using the ssh configuration, and that's the configuration my example will use. For automated synchronization, you will probably want to set up key-based (public key) authentication for ssh. There are many easy-to-follow instructions on the Internet that describe how to set this up, so I won't cover it in detail here.
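For completeness, the bare-bones version of that setup might look like the following, using the user name and workstation address from the example later in this article; generating a passphrase-less key is a convenience-versus-security trade-off you should weigh for yourself:

# Generate a key pair without a passphrase so unattended (cron) runs can use it
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key on the remote machine so ssh (and therefore unison)
# no longer prompts for a password
ssh-copy-id mdiehl@10.0.1.56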

Once you have unison installed, and ssh configured, it's time to start synchronizing! But first, we should discuss, briefly, how unison works, especially the first time it is run against a particular file repository. The first time you use unison on a file repository, the program makes a note of the modification timestamp, permissions, ownership, and i-node number of each file in both repositories. Then, based on this information, it decides which files need to be updated. The program stores all of this information in the ~/.unison directory. The next time unison is run on the file repository, changes are trivial to detect. Intuitively, you might expect that unison is examining each file's contents to see if the file has changed, but that isn't what is happening. If a file's modification timestamp or i-node number changes, the file needs to be updated. This is a very fast check and scales well, even with very large files.
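This is not how unison actually stores its archive files, but as an illustration of the metadata it keeps an eye on, you can list the same attributes yourself with GNU stat:

# Show name, modification time, i-node, permissions, and ownership for each file
stat --format='%n  mtime=%Y  inode=%i  mode=%a  owner=%U:%G' /home/mdiehl/Development/*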

Here is a quick example from one of my computers:

unison /home/mdiehl/Development ssh://10.0.1.56///home/mdiehl/Development/ -owner -group -batch -terse

This should all be on one line. I do a lot of software development and in this example, I'm using unison to synchronize the development directory from my Internet accessible server to my workstation on my private network. Even though this example is fairly intuitive, it doesn't get much more complicated than this, so let's take a closer look.

The example synchronizes /home/mdiehl/Development on my server to the same directory on my workstation, whose IP address is 10.0.1.56. The ssh protocol is used for the file comparison and transfer. Since this is a bi-directional process, it doesn't matter where the script runs as long as the two machines can reach each other over the network; it's just more convenient to run my scripts on the server, but I could just as easily run this script from my workstation if I changed the IP address in the script.

The “-owner” and “-group” parameters tell unison to attempt to synchronize the user and group ownership. You need to make sure that the owners and groups exist on all of the machines you intend to synchronize. For example, if you are syncing a directory owned by the user “bob,” whose uid is 500, you need to be sure that “bob” exists on every server. Otherwise, you will find that unison creates an entire directory structure owned by uid 500. This is messy, but easily resolved.

Since I run this example command from cron, I use the “-batch” parameter, which tells unison to not ask the user any questions, and simply do what it can if there are any conflicts. Similarly, the “-terse” parameter keeps unison from filling up my cron log with a bunch of unnecessary output.
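As a rough illustration (the schedule is just an example, not necessarily the author's), the crontab entry wrapping the command from this article might look like:

# Run the synchronization every night at 1:00 AM; -batch and -terse keep it quiet and unattended
0 1 * * * unison /home/mdiehl/Development ssh://10.0.1.56///home/mdiehl/Development/ -owner -group -batch -terse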

When I run the example, above, I am presented with a list of updates that are being made between the two computers. The final lines are the most important, though:

UNISON finished propagating changes at 01:05:15 on 13 Apr 2008
Synchronization complete (8 items transferred, 0 skipped, 0 failures)

As you can see, 8 files needed to be transferred in order to synchronize the two servers. Fortunately, there were no problems: all 8 files were transferred, and my two machines are back in sync. If there were files with conflicting changes, we would see that in the “skipped” tally. If there had been file permission or network problems, those would have shown up as failures. Either way, we'd want to go back through the log to find out what happened.

In the several years that I've been using unison, I've only had a few problems with it. As mentioned earlier, the most common problem stems from having conflicting file changes. For example, if you make a change to a file on one server and then change the corresponding file on the other server and the files don't end up being identical, unison sees that as a conflicting change and flags it. The way I usually resolve this problem is by deciding which version I want to keep and using the “-prefer” option to tell unison which version it should... prefer... when there is a conflict. In the example above, if I wanted to have the local version overwrite the remote version, I would add:

-prefer /home/mdiehl/Development

To the end of the command line.
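Put together, the full command from the earlier example (again, all on one line) would then read:

unison /home/mdiehl/Development ssh://10.0.1.56///home/mdiehl/Development/ -owner -group -batch -terse -prefer /home/mdiehl/Development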

The very first problem I had with unison was when I tried to synchronize two directories that had several tens of thousands of files in them. Unison simply ran out of memory. If I had one complaint about unison, it would be that I have to break large file repositories into smaller pieces in order to use unison to synchronize them. It doesn't seem to me that it should take that much memory to do the bookkeeping, but I can't argue with the fact that the tool works, and I've never lost a file with it.

The unison website indicates that unison is no longer under active development. This is unfortunate, but it shouldn't dissuade you from using and trusting the program. I've found it to be quite mature, and it is still actively supported via the unison mailing list. I've had a few occasions to ask for help on the mailing list, and I've found the list to be extremely helpful.

Unison is a very effective means of synchronizing servers. It can be used in a “star” topology to keep multiple servers in sync. It can also be used in a “ring,” or any other topology you might need. The documentation is quite extensive and well written. I hope you find it as effective and easy to use as I have.

Mike Diehl is a Linux Administrator for Orion International at Sandia National Laboratories in Albuquerque, New Mexico. Mike lives with his wife and two small boys. Mike can be reached via email at: mdiehl@diehlnet.com