Saturday, January 26, 2008

Foundation rewards Open Source business plans

€75,000 awarded to top three open source businesses

Ian Williams, vnunet.com 25 Jan 2008

The Open Source Business Foundation, a European open source network, has awarded a total of €75,000 to the three winners of its annual Open Source Business Awards.

The winners were announced at the "Open Source Meets Business" Conference in Nuremberg, where the foundation is headquartered.

The awards were created to recognise those companies, institutions and individuals who have been shining examples in creating viable business models around open source development.

First place, earning prize money to the tune of €50,000, went to Rapid-I, an open source data mining specialist from Dortmund.

Second prize, carrying €15,000, was taken by Openbravo, an open source ERP company from Pamplona in Spain.

The third place and €10,000 were presented to brox IT-Solutions from Hanover, a specialist in data optimisation and the personalised tracking of information.

A special award in the shape of a quad-core processor PC was presented to the Freiburg-based Core Systems.

The decisions were made by a panel of 17 members, and the votes were unanimous. All award winners were commended as outstanding, innovative examples for the European open source sector.


Examining the XO Activities and Durability

Having thrown around a few initial impressions about the OLPC XOs, I thought I would take a more in-depth look at the user interface and some of the activities kids can engage in. And I have a couple of comments about their durability and adjustable screens.

Before I get into the Sugar user interface and activities, I'd like to revisit the issue of using the XOs outside, as well as talk about their durability. I took my XO outside in direct sunlight at high noon, and read a colorful PDF file with no problems. The screen switches from color mode to black and white mode automatically. Or at least, I didn't do anything special to adjust the screen. I did get glare when I adjusted the screen to catch the sun head-on, but my assumption is that most students will probably adjust their screen so that there is no glare. Actually, even with some glare, I could read the text fairly easily.

As for their durability, I decided to abuse mine a little bit. I wasn't willing to risk damaging the machine outright, as I still have to work with a handful of children. That means I need a working machine. Still, a 30" drop from my dining room table onto a carpeted floor is a little risky. With the laptop opened up, ears extended, I gave it a bit of a nudge. I watched as it hit and tumbled. It fell screen-first, with the ears simply folding up as they hit the floor. Nothing broke. The screen did not even blink. I just picked it up and kept working. I also did it while it was turned off and closed up. It booted up just fine. If you're interested in supplying some of these for more serious crash testing, I'll be happy to put them through the wringer.

The Sugar UI's desktop consists of an XO figure in the middle of what I call a pineapple ring. This is surrounded by a vanishing frame. Just beneath the pineapple ring is a set of indicators for showing the battery and network connection (wifi signal) strength. The top of the frame gives you access to the network (neighborhood), groups, desktop, and the highlighted activity. The bottom frame gives you access to the various activities.

Launching the neighborhood tool from the top frame takes you to a page where you can see other XOs, wifi Internet connections, and other potential connections. From there, you can normally click one to connect to it. As I understand it, you have to choose between connecting to the Internet or to the mesh network with other XOs. In my experience, you can still connect to other users while connected to the Internet, and it appears that the idea behind the mesh is that at least one of the XOs (or the school server) will have Internet access, with the others feeding through that.

After initially connecting manually to my home wifi (just by clicking on the connection from the Neighborhood view), I can now simply boot up and hit the web, without any further manual connection efforts. As I was saying, I was able to share the Write activity between the two laptops while one was connected to the Internet. Groups apparently are the active connections with other XOs. Thus, you can choose to share an activity with other XO users in your group (people you are already sharing activities with), or with those in the larger neighborhood.

The chat activity and mozilla-based web browser are no-frills tools, but do their jobs well enough. My biggest complaint about the chat client is that it does not seem to include any kind of contact or favorites list. Instead, you have to first start the chat client, share it, and then go back to the neighborhood (or group) view, to invite someone to chat. The browser gives you a bookmark function, but I have yet to figure out how to access saved bookmarks, other than through the journal. I assume this is related to the security concerns, much like the inability to open documents from within an application. Still, it's a bit aggravating for those who are at least somewhat familiar with how most web clients work. Of course, to those who are not, it makes no difference.

On the bright side, students can access most web content, download and read PDF files, listen to Ogg-Vorbis feeds, and so on. And that chat client does work. As I said in my previous post, I have my gripes, but they are mostly minor issues. These gripes are likely most important to people like me, who already have access to good technology. I really don't believe that children growing up with XOs will somehow feel traumatized by having to learn some new tricks when they do gain access to more commonly used programs.

The Abiword-based Write activity is fairly straightforward. It's difficult to gripe about a word processor that can save documents in RTF, HTML and plain text. It would be nice to have ODF capability, but for most educational situations involving young children, the format is largely irrelevant. What is important is whether children can learn to write/type - to spell and put sentences together. They need to know when and how to use emphasis, graphics and tables in writing. Write lets them do exactly that.

Of course, most people with access to modern technology might expect to see these children printing out their documents. Well, as expensive as ink is here, I can only imagine what it might cost elsewhere. And then there's the cost of the paper. In developing countries, it's much cheaper and easier to just let the children save their documents without printing them. Actually, I only use my own printer for printing downloaded forms or maps. I rarely print anything any more. One can always save the documents to a USB drive and print them from a computer with a printer. But you can install and configure CUPS if you really need to print. Most schools probably won't even give their students direct access to a printer - not even here in the US.

I would like to finish up with the XO's multimedia capability. It's really cool to have a webcam, and to be able to take quick snapshots and record audio. I think it's important for children to have access to this kind of technology, since it will very likely become ubiquitous over time. While snapshots look pretty good, in my opinion, I think the video recording quality is fairly low. It works, but just barely. While it is by no means a show-stopper for me, it is a bit disappointing.

The only issue I have with the audio recording capability is that children had better get a microphone if they intend to record their voices. Well, they could shout, which is about what you have to do to be able to hear yourself on playback. If schools rely on the built-in mic, I hope they also provide the teachers with plenty of headache medicine. Again, not really a show-stopper, but I warned you that I do have my gripes about these XOs. In the grand scheme of things, it's great that they have given the children the ability to record their voices. I just hope they can improve the multimedia capabilities in due time.

I have discussed using the XOs in sunlight and their durability, as well as the first four activities included with these machines. I will cover some more XO activities in my next post. Stay tuned!

Get Information About Your BIOS / Server Hardware From a Shell Without Opening the Chassis (BIOS Decoder)


biosdecode is a command-line utility that parses the BIOS memory and prints information about all structures (or entry points) it knows of. You can find out more information about your hardware, such as:
=> IPMI Device
=> Type of memory and speed
=> Chassis Information
=> Temperature Probe
=> Cooling Device
=> Electrical Current Probe
=> Processor and Memory Information
=> Serial numbers
=> BIOS version
=> PCI / PCIe Slots and Speed
=> Much more

Specifically, biosdecode parses the BIOS memory and prints information about the following structures:
=> SMBIOS (System Management BIOS)
=> DMI (Desktop Management Interface, a legacy version of SMBIOS)
=> SYSID
=> PNP (Plug and Play)
=> ACPI (Advanced Configuration and Power Interface)
=> BIOS32 (BIOS32 Service Directory)
=> PIR (PCI IRQ Routing)
=> 32OS (BIOS32 Extension, Compaq-specific)
=> VPD (Vital Product Data, IBM-specific)
=> FJKEYINF (Application Panel, Fujitsu-specific)

In this tip you will learn how to decode BIOS data (dump a computer's DMI table) and get information about your hardware without rebooting or opening the server.

More about the DMI tables

The DMI table doesn't just describe what the system is currently made of; it can also report possible upgrades, such as the fastest supported CPU or the maximum amount of memory supported.

dmidecode - Read DMI data in a human-readable format

Data provided by biosdecode is not in a human-readable format. You need to use dmidecode command for dumping a computer’s DMI (SMBIOS) table contents on screen. This table contains a description of the system’s hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can retrieve this information without having to probe for the actual hardware.

Task: Display information about IPMI Device

# dmidecode --type 38
Output:

# dmidecode 2.7
SMBIOS 2.4 present.

Handle 0x0029, DMI type 38, 18 bytes.
IPMI Device Information
Interface Type: KCS (Keyboard Control Style)
Specification Version: 2.0
I2C Slave Address: 0x10
NV Storage Device: Not Present
Base Address: 0x0000000000000CA2 (I/O)
Register Spacing: Successive Byte Boundaries

Task: Display information about PCI / PCIe Slots

# dmidecode --type 9

# dmidecode 2.7
SMBIOS 2.4 present.

Handle 0x000E, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#1-133MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 1
Characteristics:
3.3 V is provided

Handle 0x000F, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIX#2-100MHz
Type: 64-bit PCI-X
Current Usage: Available
Length: Long
ID: 2
Characteristics:
3.3 V is provided

Handle 0x0010, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#3-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided

Handle 0x0011, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#4-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided

Handle 0x0012, DMI type 9, 13 bytes.
System Slot Information
Designation: PCIE#5-x8
Type: Other
Current Usage: Available
Length: Other
Characteristics:
3.3 V is provided

Task: Display information about BIOS

# dmidecode --type 0
Output:

# dmidecode 2.7
SMBIOS 2.4 present.
Handle 0x0000, DMI type 0, 24 bytes.
BIOS Information
Vendor: Phoenix Technologies LTD
Version: 6.00
Release Date: 01/26/2007
Address: 0xE56C0
Runtime Size: 108864 bytes
ROM Size: 1024 kB
Characteristics:
PCI is supported
PNP is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
EDD is supported
3.5"/2.88 MB floppy services are supported (int 13h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
BIOS boot specification is supported
Targeted content distribution is supported
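
Task: Display maximum supported memory

The DMI table records upgrade ceilings as well as what is currently fitted. Dumping the Physical Memory Array structure (type 16 in the table below) shows them; look for the Maximum Capacity field in the output (the exact fields reported vary by BIOS):

# dmidecode --type 16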

Understanding BIOS keywords

dmidecode --type {KEYWORD / Number}

You can pass dmidecode any of the following keywords:

  • bios
  • system
  • baseboard
  • chassis
  • processor
  • memory
  • cache
  • connector
  • slot

All DMI types you can use with dmidecode --type {Number}:

Type  Short Description
0 BIOS
1 System
2 Base Board
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply

Display Power supply information, enter:
# dmidecode --type 39
Display CPU information, enter:
# dmidecode --type processor
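
If you only need a single value, such as a serial number, recent versions of dmidecode also offer a --string (-s) option that prints just that string, which is handy in scripts (check man dmidecode to confirm your version supports it):
# dmidecode -s system-serial-number
# dmidecode -s bios-version
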
Read man page for more information:
$ man dmidecode

Can Sun execute on its open source strategy?

Posted by Dana Blankenhorn @ 9:01 am

The purchase of MySQL completes Sun's transformation into an open source company.

It is, as the poker players say, “all in.” (Picture by our own Dan Farber.)

Now the challenge is to execute. Jonathan Schwartz will be judged on his open source execution just as Carly Fiorina was judged on her absorption of Compaq. (We already know he has better hair.)

The question is, will open source be judged by the execution of Sun’s strategy?

Many lazy analysts will say, “yes, of course.” Sun has transformed itself around open source. It has put its hardware and software licenses under the GPL. It has thrown $1 billion at an open source start-up.

But not so fast. Execution is a complex business. Two companies may have the exact same strategy, one succeeding and the other failing, so what does that result say about the strategy? Nothing.

Sun has made its own tweaks to the strategy. It controls the open source projects its life depends upon. It has no Plan B, no second set of businesses it can rely upon if this fails.

These facts are a notable contrast with IBM, which has already proven it can execute on an open source strategy. Both companies have lost value over the last year, but IBM is down just 10%, while Sun is down nearly one-third.

So the pressure is on at Sun. But we should not confuse its fate with the fate of open source in general.

HP Launches Open Source Governance Initiative

By Nathan Eddy, CMP Channel
2:20 PM EST Fri. Jan. 25, 2008

Hewlett-Packard (NYSE:HPQ) launched two Web sites this week to help companies more effectively govern free and open source software (FOSS) code. The company hopes the tools will help enterprise customers mitigate risks and recognize the business benefits of FOSS.

"We've seen a critical mass of questions out of the industry, and increasing need for us to begin this discussion and make some contributions to moving open source forward," says Doug Small, HP's worldwide marketing director for open source and Linux. "We think the time is right to do this as open source becomes more pervasive."

The two sites, FOSSology.org and FOSSBazaar.org, each offer different tools for companies looking to develop a satisfactory strategy for governing open source software. FOSSology is a free, downloadable toolset designed to help users address deployment issues like FOSS acquisition, tracking and licensing. FOSSBazaar is a source for businesses to join discussion groups, read white papers and educate themselves on the implications of using open source software.

Small says he hopes VARs will use these tools to create new opportunities for their companies and their clients. "The combination allows a channel partner to get both the tools and the best practice to bring solutions to those customers," he says. "[VARs] can now get these resources and build a practice around it."

FOSSology and FOSSBazaar are the culmination of seven years of effort by HP, which found it needed a system of governance for the deployment of open source projects within the company. Small says he feels the open source community is gaining considerable traction, reflecting a maturation of the open source market beyond Linux.

"As customers get more comfortable with open source they deploy it more," he says. "I think resellers who are advanced in open source are going to take a look at these sites and find the basis for some interesting service opportunities."

Monday, January 21, 2008

IBM Bringing Lotus Notes To iPhone?

Posted by: Arik Hesseldahl on January 21

The iPhone’s biggest weakness is its lack of support for enterprise email platforms like Microsoft’s Exchange and IBM’s Lotus Notes. It looks like that’s changing. Reports are circulating that IBM is close to announcing a version of Notes for the iPhone, and based on what I’m hearing that is pretty close to the mark, though an announcement may not come this week, as has been reported elsewhere.

What matters most is that iPhone owners who want to access their corporate email from the device will be able to do so with the full support of their corporate IT departments, which until now have been skittish about supporting the iPhone for a variety of reasons, most of which can be traced to nothing more than uncertainty, or to simply not wanting to support another wireless device.

InformationWeek reported that a version of Lotus for the iPhone would be announced at the Lotusphere conference in Orlando this week. No formal announcement has yet come from IBM, and the conference is underway. I’m told by someone familiar with the situation that there likely won’t be an announcement. More likely the news will come when Apple formally takes the wraps off the iPhone software development kit next month.

It's pretty clear, however, that IBM is working more closely with Apple, and it would make perfect sense for the iPhone to be central to that effort. IBM-Lotus works closely with Research In Motion, and the iPhone, as Steve Jobs revealed last week, is second in popularity in the U.S. only to the BlackBerry, so IBM would be nuts not to embrace the iPhone.

Meanwhile, Engadget notes that AT&T has released an iPhone rate plan for business customers: $45 to $65 a month for unlimited data and visual voicemail, with text-message limits that vary (200 messages on the low end, unlimited texts on the more expensive plan). Looks like corporations will soon have all they need to deal with requests for support from iPhone owners.

The next important move will be on Exchange. Yes, you can access mail on Exchange servers when IMAP is enabled, but in my experience, it rarely is. Direct Exchange support will go a long way toward making the iPhone a viable corporate device.

Unshaking and refocusing your photos

By Nathan Willis on January 21, 2008 (9:00:00 PM)

Whether by wind, vibration, or shaky hand, we have all taken blurry photos. But in the digital era, there is no need to despair -- you can remove shake and blur from your pictures after the fact. Several Linux-friendly utilities can help you.

Deconvolution is the general process that helps remove the effects of camera shake and blur. If you want to understand the math behind the process, start with the articles referenced at Wikipedia and you can find as much detail as you want. In a nutshell, it involves taking the Fast Fourier Transform of the image (which makes it easier to see the tell-tale signs of blurring), smoothing out the artifacts, then transforming the image back into its original form. It is a CPU-intensive process, but for a shaky image there is no better use of your MHz.
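
For the curious, one common formulation of this (Wiener deconvolution; the exact variant each tool uses may differ) goes like so. Model the blurry photo g as the sharp image f convolved with a blur kernel h, plus noise n:

g = h * f + n

Convolution becomes plain multiplication in the frequency domain, so after an FFT this is G = H·F + N. Dividing G by H directly would blow up wherever H is close to zero, so a damped estimate is used instead:

F_est = conj(H) · G / (|H|^2 + K)

where conj(H) is the complex conjugate of H and the constant K suppresses the noise-dominated frequencies. An inverse FFT of F_est yields the deblurred image; the "smoothing out the artifacts" step above corresponds to choosing K (and the kernel h) sensibly.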

Unshake

The most straightforward way to get started is with Mark Cahill's Unshake, a small Java app with a lot of options that performs some helpful guesswork to speed things up. The latest release is 1.5, which requires Java 2 or greater. Unshake is closed source, and the license prevents use for commercial purposes.

You can unpack the distribution anywhere on your system and launch the app by running ./unlaunch.sh. The interface has controls at the top, a status window beneath them, and a clickable file selection widget at the very bottom. When you open a photo, you start by resizing the photo window to show just the portion of the image that you are most concerned about. This is a critical step, and one that the interface does not explain to you. If you choose the wrong portion of the image, Unshake could misidentify portions of it as blur and overdo the correction.

When the window is properly set, click on the Estimate button and Unshake will give its best guess as to how long the process will take. You can adjust the blur severity and correction quality parameters and get new estimates. To see the results, just click the DeBlur button. Unshake will perform the image correction and open the result in a new window, which you can then save.

Unshake attempts to determine on its own how much correction to apply. The actual processing time can vary greatly depending on what it decides. The Time control allows you to allot more or less time to Unshake's process. By default it is set to "x1" meaning that Unshake will perform its transformations within the Estimate time. You can turn the Time parameter all the way to "x100" if you want the algorithm to perform more detailed analysis.

At the default settings, 40 to 50 seconds is a reasonable estimate time for a 1.3 megapixel image. Cahill notes on the Unshake site that he has never seen the algorithm come close to the "x100" time allotment -- he just put it in as an upper bound to make sure the program exits.

Refocus

The GIMP has two third-party plugins that do the same kind of correction as Unshake. Both are open source, although neither of them seems to be undergoing active development.

The first is Refocus, which is officially at version 0.9.0. That release is from 2003, though, and several other developers have released their own patches to bring Refocus compatibility up to modern versions of the GIMP. Richard Lemieux and Peter Heckert each maintain a page for their respective patched versions. Debian and Ubuntu include Lemieux's version as 0.9.1 in their package managers.

You start Refocus from within the GIMP, in the Filters -> Enhance menu. The tool has five parameters and a preview window. It is not easy to explain the parameters without delving into deconvolution math, but you can play around with the settings. Roughly speaking, the Radius and Matrix Size controls determine how small a blur the algorithm will detect, and the Gauss, Correlation, and Noise controls affect the degree of smoothness and pixelization artifacts to allow.

In my tests, Refocus was drastically faster than Unshake, taking less than five seconds to deblur an image that required 30 or 40 in the latter program. Of course, one of Unshake's selling points is that it performs its own automatic tweaking, thus producing higher-quality output.

Iterative Refocus

The other helpful GIMP plugin is named Iterative Refocus (or refocus-it). Its last release was 2.0.0, in 2004, but unlike Refocus no one that I am aware of is putting effort into making sure it remains compatible with newer versions of the GIMP.

Fedora provides the plugin as the gimp-refocus-it package, but users of other distros will probably need to compile from source. Fortunately, this is simple. Make sure that you have the GIMP development libraries installed, then execute the old ./configure; make; make install three-step. On Ubuntu 7.10, I found that I needed to create the directories /usr/local/share/help/C and /usr/local/share/help/images. You could also attempt to alter the Makefile to correct for this problem, but creating the directories is faster.
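
On a Debian-style system the whole dance looks roughly like this (the development package name is the Debian/Ubuntu one and may differ on other distros; the mkdir step is the Ubuntu 7.10 workaround just mentioned):

$ sudo apt-get install libgimp2.0-dev
$ sudo mkdir -p /usr/local/share/help/C /usr/local/share/help/images
$ ./configure
$ make
$ sudo make install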

Once installed, Iterative Refocus also shows up in the Filters -> Enhance menu. Iterative Refocus has more options than Refocus, including the ability to specify the direction and size of the motion blur in the image. Knowing this allows you to better correct for the blur's ill effects -- but of course it is rarely simple. The worst kind of motion blur is not in a straight line, it is multi-directional or circular. Still, if you can zoom in on the image and determine which direction and how far it stretches, you are that much closer to eliminating it.

As its name implies, this plugin makes multiple iterations, correcting a little bit more on each pass. As with the other settings, it is often subjective how many iterations improve the image, but you can get a good feel for the quality of the output using the Preview button on a few.

No silver bullet

If you watch the movies, you have probably seen the impossibly accurate "computer enhancement" hand-waving that turns a blurry mess into a crystal clear mug shot or license plate for the hero to chase. Real-world image enhancement is not that good, but you may still be surprised at the level of quality a good Fast Fourier Transform and deconvolution can produce.

All three of these applications produce admirable results. Refocus is the fastest, and subjectively Unshake produces the cleanest results. It is unfortunate that among the three alternatives, one is not free software and the other two lack active maintainership. But since the math is well understood, maybe someone will pick up where the other programmers left off, and bring even better refocusing technology to the image editors of tomorrow.

Feds appeal loss in PGP compelled-passphrase case

Posted by Declan McCullagh

It's time to take another look at the intriguing case of United States v. Boucher, which may set the ground rules for whether or not criminal defendants can be compelled to divulge encryption passphrases.

When I last wrote about the Boucher case, the U.S. Department of Justice was refusing to comment on the matter. Here's my original article from last month for background.

The case arose because federal agents believe Boucher has child pornography on his laptop, and obtained a warrant to search it. But part of the hard drive was PGP-encrypted, and the Feds obtained a subpoena to force him to disclose (or even simply type in) his passphrase.

U.S. Magistrate Judge Jerome Niedermeier in Vermont rejected the subpoena on Fifth Amendment grounds--namely, that compelled disclosure of a passphrase amounted to self-incrimination. The Fifth Amendment says no person "shall be compelled in any criminal case to be a witness against himself."

The Washington Post, by the way, finally got around to writing about this (a month later) on Wednesday in a page one article. It quotes Boucher as saying that he likes to download Japanese cartoons and occasionally adult pornography, but that he does not seek to view child porn.

Now the Justice Department is filing a sealed appeal of the magistrate judge's decision to U.S. District Judge William K. Sessions. Sessions is a Clinton appointee, a former public defender who became a partner at the Middlebury, Vt. law firm Sessions, Keiner, Dumont & Barnes. He was part of the U.S. Sentencing Commission during the Clinton administration.

What's a bit odd is that, as far as I can tell, the Feds' appeal brief itself was filed under seal on January 2, and Boucher's reply brief in opposition filed on January 15 was also under seal. Considering that the original criminal complaint is public, and the magistrate judge's Fifth Amendment decision is public, there's no obvious reason why this extra secrecy is necessary. More on this as the case progresses.

Building a Home File Server

January 17th, 2008 by Phil Thane

Setting up a file server doesn't need to be complicated.

With three desktop machines (Kubuntu, Win XP and a testbed, which is currently running ReactOS) and a laptop (Xubuntu) in use at home, our IT is reaching small office proportions, and like many small offices, we run into file sharing problems. Peer-to-peer networking is fine when all the machines are on, but inevitably the file I want is on a PC that isn't running. Even worse, it may be on my testbed machine, which is currently in pieces or undergoing yet another upgrade. So, we need an always-on server that any of us can access at any time, but if it is always on, it needs to be quiet, reliable and cheap to run.

These requirements rule out a Pentium 4 (too hot and power-hungry) and Windows (needs rebooting too often). Fortunately, I just happen to have a Pentium III of no great distinction that sports a massive passive cooler, and I'm a bit of a Linux enthusiast. Apart from stability, Linux has several other advantages. It's free. It is almost totally virus-resistant, and it comes with excellent firewalling and security features. And, it is easy to administer remotely, so once it's set up, the server doesn't need its own keyboard, mouse or screen, which saves expense, space, power and heat.

The Case

I'm planning on hiding this server in the loft, so frankly, what it looks like isn't an issue; the prime requirement for the case is that it is big and airy, allowing good air flow with nothing more than natural convection. Apart from choosing a big one, there are a couple of things you can do to improve air flow. Remove any case fans if you're not actually using them, as they impede the flow through the vents. Remove unnecessary drives; they waste space and their cables impede air flow. All you need is a hard disk for your OS and files and a basic CD-ROM drive to load the OS. Remove unnecessary cables too, and tie up those you cannot do without to keep them out of the way. Remove surplus cards, as your file server will not need sound, 3-D graphics, USB, FireWire, SCSI or MIDI. On-board graphics or a small basic graphics card is all you need. Remember, warm air is less dense than cold and tends to rise, so make sure there is an inlet at the bottom and an exit near the top. And, if you do hide your server away somewhere, don't bury it in junk or put it in a confined space; let the air get to it.

Figure 1. The Case

There are acoustic damping kits available for PC cases that can kill a lot of noise from fans and disks, but as this server will run fanless, it's not necessary and can diminish the transfer of heat through the case. If the case is already padded with the stuff, remove it.

Figure 2. Strip out everything you don't need.

The PSU

The majority of PC power supplies have a fan that blows warm air out the back of the case, but there are fanless designs available and also some semi-fanless designs that run quiet most of the time but have a fan that kicks in when a heavy load on the PSU causes things to get warm. I'm using a 300W fanless FSP Zen model bought second-hand, but many similar models exist. By modern standards, 300W isn't much, but it's plenty for a Pentium III with basic graphics. Depending on your foraging and bargaining skills, the PSU may well cost more than the rest of the project put together, but it's worth it for silent running.

The CPU

My Slot 1 Pentium III originally was used in a slimline IBM desktop machine. (Remember the type you put under the monitor not under the desk?) It was fitted with a huge heatsink and had a plastic duct from there to the PSU air intake so that the PSU's fan sucked air over the CPU. Some years ago I re-housed it in a standard ATX mini-tower, but of course, the duct was completely the wrong shape, so I left it off and found the chip ran perfectly well without it. It's not good practice for a chip that's working hard, but in this server, it's going to be idle most of the time. It will just keep the OS and networking software ticking, and from time to time will pass an instruction to a hard disk -- not exactly stressful.

Pentium III base units are available from various on-line suppliers and local computer shops. If you buy one with a conventional small heatsink and fan, then around $10 US on eBay will get you a replacement Slot 1 processor with a large heatsink attached. You might even be able to sell the other one or keep it as a spare.

The Motherboard

If you go the economy route, buying an old base unit, the board that is fitted will be fine. If you buy one separately, don't get hung up on specs; performance is not really an issue. Having onboard graphics is useful. Fancy 3-D cards use more power and create more heat, but a basic old AGP card will do too. A modern Linux desktop distro needs about 512MB of RAM to run a GUI and graphics applications happily, but in this situation, it will manage with much less, the only irritant being that the actual installation process might be slow.

The Hard Disks

It's unlikely that a Pentium III motherboard will support SATA, but even an IDE drive will handle data faster than your home network, so that's not really a problem. I opted for a single 80GB drive from good-old eBay. When it starts to fill up, I'll add another. If you can afford it, buy more or a larger one. If you are really serious about keeping the server quiet, you could invest in flexible drive mounts that isolate the drives from the case.

The Operating System

Linux, obviously. The version of Linux isn't really an issue; almost any would do. I used Kubuntu. I chose it because KDE has built-in K Desktop sharing based on VNC for remote administration. It is a single CD download that's easy to install. Download the .iso file from Kubuntu's Web site (http://www.kubuntu.org), and burn it to CD-R or -RW. Whichever CD burner software you use, make sure you choose the option to burn an ISO image file rather than the regular Data CD option. If you don't, you'll have a very useful backup, but it won't boot!

Installing Kubuntu should be just a matter of inserting the CD, rebooting and following the on-screen instructions. However, older PCs, such as the IBM used here, will not boot from CD. To get around this you need Smart Boot Manager -- a very small file that boots from a floppy and then lets you choose which disk to run from. Choose CD-ROM, and you're all set. Smart Boot Manager has to be written as an image file, and rather like making a bootable CD, simply copying the file to a floppy doesn't work. There are full instructions for both Linux and Windows users on http://linux.simple.be/tools/sbm, and a disk writing utility for Windows that is very easy to use. Incidentally, this is a useful disk to have for any OS that refuses to boot. The only downside to this is that you need a floppy drive, so I put one back in and then removed it once the OS was installed.

Connect your server to your network and Internet router before you start. During installation, it will detect the connection and set it up automatically. It will ask a few basic questions about your location, language and time zone, but nothing taxing. Hostname can be anything, but I use Server. The basic distro includes some desktop software you won't really need, but just go along with the default selection for now. Kubuntu will ask you to set up a user during the installation. Something like System Manager or Administrator is sensible; save your real name for when you set up a normal user account later.

Once installation is complete, it is time to fire up Adept. Debian-based distros use the Apt package management system, and Adept is the KDE GUI that makes it easy to use even if you have an aversion to command-line work and text editing. Go to Start Menu -> System -> Adept. Browse the list of installed apps, and mark things like media players and graphics software for removal. If there is anything you are not sure about, leave it. Click Apply Changes to remove the selected apps. Now you can click the Full Upgrade button to update whatever is left. Finally, you need to install some networking applications. Find the following in Adept: samba and samba-common. Mark them for installation and commit changes.
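
Since Adept is only a front end to Apt, command-line fans can accomplish the same installation in one line:

$ sudo apt-get install samba samba-common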

Configuration

All configuration paths start at the KDE Control Center.

Unless you have a very unusual network card, Kubuntu will detect it and set it up using DHCP. This will work, but it makes remote administration tricky, as you have no way of knowing the server's IP address. Go to Network Settings, click Administrator Mode, and enter your password. Select the interface, and click Configure. Assuming your router is set up using 192.168.1.1, make the server 192.168.1.2. You can continue to rely on DHCP for your other PCs.

Figure 3. Change from DHCP to Fixed IP
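
If you prefer to skip the GUI, the same static address can be set by editing /etc/network/interfaces directly -- this sketch assumes a Debian-style setup and that your card shows up as eth0:

auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1

Restart networking (or simply reboot) for the change to take effect.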

Samba uses the Microsoft SMB protocol to interact with Windows shares. It talks to Samba on other Linux boxes too, making it the perfect way to set up a mixed network. There was a time when configuring Samba made strong sysadmins weep. These days, for home networking at least, it is very easy. Different configurations suit different circumstances, but for starters go to System Administration -> Users & Groups, and create a user for each person likely to want to put work on the server.

Now, go to Internet & Network -> Samba. In the Base Settings dialog, set a workgroup name (your hostname will be there already). Click the Shares tab, check that homes is already set (add if necessary), then select it and click Edit. Check Share all home directories (or don't, and add each one you do want to share manually). The remaining tabs in this dialog can be used to increase security, either for business use or perhaps to keep kids out of your files.

Figure 4. Setting Up Samba via the KDE Control Center

Click OK on the Shares page to return to the main Samba dialog, and click the Users tab. Select your Samba users from the list, and click Add. Set a password for each (and make sure you record them somewhere and give them to the relevant users) or don't -- it depends what you have on your PC and who is able to access it. Click OK to save your changes, and exit.
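
Under the hood, these dialogs simply edit /etc/samba/smb.conf. A minimal hand-rolled equivalent looks something like this (the workgroup name is just an example):

[global]
   workgroup = HOME
   security = user

[homes]
   comment = Home Directories
   browseable = no
   writable = yes

Samba passwords can likewise be set from a shell with smbpasswd -a username, which is what the Users tab is doing for you.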

So far you have only "enabled" sharing. Now, to set up shares, you need either to log in as each user or, better still, run Konqueror as root. Press Alt-F2 to bring up the Run dialog. Enter konqueror, and click Options -> Run as different user. Choose root, and enter your password. In Konqueror, browse to /home and right-click on a folder. Go to Properties -> Share -> Configure File Sharing -> OK. Check Simple Sharing, and click Add. Browse to find the folder in question and click OK. Select Share with Samba. Under Samba Options, make the folder Writeable, and under More Samba Options, set Public, Browsable and Available. Return to /home, and click the Reload button. The folder should now have a hand symbol indicating that it is shared. Repeat this with other folders.

As a final tweak on the folders, again running Konqueror as root, go to a folder's Properties -> Permissions menu, and change them to Group and Others can Read and Write. Depending on who has access to your network, you might want to rethink these.

There is no reason why folders have to have people's names. You could just as easily set up and share Photos, Office or MP3.

Accessing the shared folders from a Windows PC is indistinguishable from accessing a normal Windows share. From another Kubuntu box, go to System Menu -> Remote Places -> Samba Shares. From any Linux box, run your file manager and enter the address of the server in the form smb://your.workgroup. To make life even easier, right-click on the KDE desktop, select Create New -> Link to Location, and enter the URL there, giving you an instant Network Neighborhood experience.
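
You can also sanity-check the shares from any Linux box that has the Samba client tools installed (the username here is an example):

$ smbclient -L 192.168.1.2 -U phil

This lists every share the server is offering, which is a quick way to confirm the configuration before hunting through file managers.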

Remote Desktop Sharing (RDC)

If the server is to run without a keyboard and screen, and especially if it is to be hidden away somewhere, remote administration is very useful. And it's not the least bit difficult to set up. KDE has simple GUI tools for VNC. On the server, go to Network & Internet -> Desktop Sharing. Check Allow uninvited connections, Announce service on network and Allow uninvited connections to control desktop. But don't check Confirm Uninvited connections before connecting. It is good practice to set a password at this point.

Figure 5. Making a Connection

On the PC you want to use to access the server, run krdc (K Remote Desktop Connection), and enter the IP address of the server followed by :0 (zero, not O). Click Connect, and the remote desktop appears, giving you complete control of the server.

Figure 6. Choose the Connection Speed
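
If the client machine runs Linux but not KDE, TightVNC's own viewer does the same job from a shell, using the fixed IP address set earlier:

$ vncviewer 192.168.1.2:0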

To control the server from a Windows PC, download and install TightVNC (it's free from SourceForge). Run TightVNC Viewer, and enter the IP address of the server followed by :0.

Figure 7. KDE Control Center via VNC on Windows XP

Checking the Server

Run the server in an accessible location for as long as you can before you hide it away, and check that it can run without a keyboard and mouse. You may need to make changes in the BIOS to enable this. Make sure you can reboot it by remote control. It helps if you set an automatic login via System Administration -> Users & Groups -> Convenience.

Figure 8. Shutting Down the Server by Remote Control

About the Author

Phil Thane lives in Wales (UK), has been a teacher and worked for eight years on tech-support (Windows-based CAD/CAM systems for educational use). Phil started freelance writing 15 years ago and began using Linux about three years ago as a hobby. He is now a freelance writer/teacher/trainer.

And now you can sell things with open source, too. Introducing Magento

Posted by Matt Asay

Jack Aboutboul at Red Hat clued me into an interesting open-source ecommerce platform today. Called Magento, it's built by Varien and is "a feature-rich, professional open-source eCommerce solution offering merchants complete flexibility and control over the look, content, and functionality of their online store."

Put in English, Magento is an open-source solution for setting up and managing an online store. The product appears to be pretty robust already, but the roadmap looks even better.

If you need to set up an online store, why pay six or seven figures to do so when you can use Magento for free and then pay for support when you go into production?

Those open sourcerors. Why won't they stay in the limited boxes/categories where the 20th-century proprietary vendors want them to remain? Marketcetera, OpenAds, etc. Darned pesky open source kids!

Annvix: A stable, secure, no-frills server distro

By Preston St. Pierre on January 21, 2008 (4:00:00 PM)

Annvix is a distribution aimed at providing a secure, stable, and fast base for servers. Be warned, however: Annvix is not for everyone.

When you boot the Annvix netinstall CD, you're greeted with a shell and informed that the root password is "root" and should be changed. It also advises that you set up your network and use lynx on another terminal to browse the documentation for the install. Already I could tell that this was not going to be your average user-friendly GUI installer.

Before setting up the network, however, I tried to switch my keyboard to Dvorak by typing the command loadkeys dvorak as usual. This did not work. I assumed that Dvorak wasn't included and continued, setting up my network and reading the documentation. It assured me that Dvorak was installed, and after looking in the appropriate directory I found that to be true. I had to give loadkeys the full path for it to work, which is not the behavior I'm used to, but I guess that's why it's one of the first things mentioned in the documentation.

After setting up my partitions with fdisk and adding the swap partition manually as instructed by the docs, I mounted the soon-to-be-Annvix partition and executed the install-pkgs command that the netinstaller uses to copy packages over. Despite my having done as the documentation told me and manually setting a new root password, the installer prompted me to change it again. It then copied all the packages over fairly quickly while I read a bit about Annvix on its Web site. The front end to the package manager is clearly APT, but the back end used is RPM. The developers feel that APT offers a more usable interface than yum, and that RPM is a good package manager.

When the copy was complete I rebooted. The initial boot was so surprisingly fast that I rebooted to time it. It took 17 seconds from the bootloader to a login prompt, including just over five seconds of waiting on DHCP, which could be avoided. This is much faster than any other vanilla install I've booted on my AMD Sempron 2800 with 512MB of memory. Certainly there didn't seem to be much bloat in Annvix.

When I logged in and ran apt-get update && apt-get dist-upgrade to pull down the latest code, it ran smoothly, and repeating it caused a kernel upgrade as well. I rebooted to the new kernel and everything ran properly except the loadkeys dvorak command, which now worked without the full path but only if I added the extension to the file -- again, nonstandard behavior.

I tried to install nano, my preferred text editor for quick updates, only to find it wasn't in the repository. As a matter of fact, when I looked around, I found there were a lot of things missing from the repository, the most notable of which is probably X11/xorg. There are a few libraries referencing x11 but nothing complete, and no xorg packages. I could also find no window managers, which only reinforced my belief that the packages referencing x11 were ghost packages. Clearly the Annvix developers are keen on cutting unnecessary bloat.

There were, however, many important server packages available. While perhaps not containing the widest variety of each type of server, they cover a large range of requirements with Apache2, MySQL, PostgreSQL, NFS utils, Samba, OpenLDAP, OpenNTPD, SpamAssassin, Subversion, Pure-FTPD, Exim, BIND, Dovecot and OpenSSH. Also notable were gcc and all the related libraries, Perl, and Python. A package for Apache+Perl and one for Apache+PHP were both available, so I installed the Apache+PHP package as well as MySQL. I set up the users for MySQL, then attempted to run a test page. While PHP had been installed, Apache had not been automatically configured to use it. After all the manual configuration Annvix had required so far this didn't really surprise me. MySQL proved to work properly without any interference, and after I set up Apache, everything I required from my server seemed to work.
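
For anyone repeating this, wiring PHP into Apache generally comes down to two directives in the Apache configuration. Treat this as a sketch rather than Annvix's exact file -- the module path below is a typical location and varies by distribution:

LoadModule php5_module /usr/lib/apache2/modules/libphp5.so
AddType application/x-httpd-php .php

After restarting Apache, a test page containing <?php phpinfo(); ?> confirms the handler is active.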

It was certainly as bare-bones as it could get. But was it secure?

Two important packages in the repository are Snort, a network intrusion detection system (IDS), and Aide, a host-based IDS made to replace Tripwire. A network IDS monitors network traffic for known attack patterns and possible security concerns. A host-based IDS monitors essential system files, such as the password and shadow files, to see if they have been modified. When they are modified in ways that don't meet the security policy (for example, a new user being added may be OK, but the root user's password changing may be flagged), the software contacts the system administrator. Both, along with regular updates, are essential tools for keeping a server secure. I installed both of them, then used Nikto and Nmap on a separate system to scan my Annvix server. Snort picked up on the regular scans as expected, but it surprisingly also picked up on and properly identified the scans that were specifically designed to evade detection systems. This, coupled with the fact that Nessus picked up no viable vulnerabilities while also being detected by Snort, gave me fair evidence that the Annvix install was relatively secure.
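
The scans themselves were nothing exotic; against a test address they ran along these lines (the IP is an example, and nikto.pl's path depends on where you unpacked it):

$ sudo nmap -sS 192.168.1.10
$ sudo nmap -sS -f -T2 192.168.1.10
$ perl nikto.pl -h 192.168.1.10

The second nmap run fragments its packets and slows its timing -- the kind of scan an IDS can easily miss, and the kind Snort nonetheless caught.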

All in all, Annvix proved to be almost exactly what it advertised -- a stable, secure, server-oriented distribution providing a base platform for whoever needs it. It is very well documented and reliant on an administrator for configuration instead of scripts. If you have been wanting to get into the nitty-gritty of GNU/Linux for some time and didn't know where to start, Annvix is a great base until you're ready to build your own distribution from scratch. It will force you to learn by doing, and it guides you each step of the way. Anyone looking for a server distribution might find Annvix a viable alternative to a solid but out-of-date Debian base. Either way, Annvix is worth looking into.

Preston St. Pierre is a computer information systems student at the University of the Fraser Valley in British Columbia, Canada.

Dell Ships New PowerEdge Blade Servers

Agam Shah, IDG News Service

Monday, January 21, 2008 5:00 AM PST

Dell will today add a new series of blade products to its PowerEdge server line, expanding its presence in a market dominated by rivals IBM and Hewlett-Packard.

The PowerEdge M-Series of blades includes the fastest-performing and most power-efficient blade servers the company has, said Mike Roberts, senior product planning manager for Dell.

The PowerEdge M1000E, a 10U enclosure, will support the new Intel-based PowerEdge M600 and Advanced Micro Devices-based PowerEdge M605 blade servers, also announced Monday.

The M1000E enclosure supports a range of network connectivity options, including modules for Ethernet, Fibre Channel and InfiniBand connectivity. The enclosure allows customers to upgrade or stack up on network hardware to boost networking speed.

The PowerEdge M600 blade server is a dual-socket server that supports up to two quad-core Intel Xeon processors, including processors in the Xeon 5400 series running at up to 3.16GHz. The dual-socket PowerEdge M605 servers support dual-core Opteron 2000 series processors running at up to 3GHz. Both blades support Windows Server 2003 and Linux OSes.

Targeted at data centers, the PowerEdge M1000E enclosure is priced at US$5,999, and the blades start at $1,849. The products are now available worldwide.

Dell's OpenManage systems management technology, which will be bundled with the blades, includes energy management tools. Capabilities include real-time power reporting and the ability to set power usage by blade.

Power efficiency in blade servers is an important consideration for those looking to upgrade data centers, said Richard Doherty, cofounder and director of Envisioneering Group. Energy costs have become a big factor in considering hardware for data centers, and companies are taking a closer look at reducing their carbon footprints, Doherty said.

"Going greener can be a reason for an upgrade," Doherty said.

The new blade server gives Dell an opportunity to catch up with HP and IBM blade products, especially in small data centers, Doherty said. Dell's PowerEdge M-Series will compete with IBM's BladeCenter H and HP's BladeSystem c-Class blades.

In addition, the new blade servers will need strong management tools in order to succeed, Doherty said. Dell in the past has announced service and support initiatives that haven't panned out, and the company's OpenManage system management tools are not as strong as autonomic computing offerings from HP and IBM, Doherty said.

System management is a big concern for data centers, and customers are looking for the ability to manage systems without the need for additional IT engineers, Doherty said.

Sunday, January 20, 2008

LinkXL plugin aims to monetize WordPress blogs

By Tina Gasperson on January 21, 2008 (9:00:00 AM)

LinkXL is a new way to capitalize on your blog's popularity. It leverages the keywords and keyphrases you've been including in your content in an effort to get a higher page rank on search engines. LinkXL is an ad broker, but the ads are not really ads, they're just links from certain words in your blog posts to an advertiser's site. Advertisers pay a set amount to get a linked keyword -- usually around $5 per link, per month. Publishers stand to make a lot of money, LinkXL executives assert, because of the sheer volume of content available on most blogs.

LinkXL founder John Lessnau emailed and asked me to try out LinkXL. Fresh from the Las Vegas Pub Con, Lessnau was pumped about the potential for LinkXL on blogging sites like mine at gasperson.com. Lessnau did not ask me to write about LinkXL, but I thought it was unique enough to share with Linux.com readers.

LinkXL can work on just about any site with access to its server, but it is especially easy to set up on non-WordPress.com-hosted WordPress sites. That's because LinkXL wrote a plugin for WordPress. Installing LinkXL is simply a matter of uploading the plugin to your server and clicking on "activate" in the WordPress plugin admin area. The software and service are free for publishers, including WordPress site owners.

Once the plugin is installed, the LinkXL spider begins indexing the pages on which you want to sell keywords. As the publisher, you set your own price, unlike with other ad brokers, though LinkXL suggests a price of $5 per link per month. LinkXL takes a 40% cut for handling the technical support, marketing, and billing and payment system.

When an advertiser buys a keyword or phrase and sets the number of links desired, the LinkXL plugin automates the process of inserting the link; you the publisher don't have to do anything except keep the site running and collect the monthly payment, which LinkXL can send as a check or a PayPal payment. If you don't approve of the advertiser's content, you can cancel the ad at any time, though you won't receive payment at all for the month in which the ad is cancelled. (Hint: cancel ads early in the month to avoid giving too much free advertising.)

Purchased keywords or phrases show up as hyperlinked text. Lessnau says this is an effective way for advertisers to ensure their page rank goes up at search engines like Google. For one thing, the links not only look like plain HTML, that's what they are. "Unlike other sites that do JavaScript in context links that do not help your link popularity at all, LinkXL creates HTML text links that the search engines follow and count in their ranking algorithms," Lessnau wrote at his blog. "Many SEOs believe Google already has technology in place that helps them spot text link ads so they don't count them in their search engine ranking algorithm. After all, how hard is it to spot a block of text links in the sidebar or footer of a Web site?"
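
In other words, a sold keyword ends up as an ordinary anchor tag in the post body, something like this (advertiser URL and keyword invented for illustration):

<a href="http://advertiser.example.com/">blue widgets</a>

Because it is static HTML rather than JavaScript, a search engine crawler treats it like any other editorial link.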

I haven't sold anything through LinkXL; apparently keyword contextual links aren't flying off the shelf just yet. Lessnau thinks it's just a matter of time and word of mouth. "I have started to notice more and more requests for links in content," he says. "So I think our message is starting to get out."

Every Monday we highlight a different extension, plugin, or add-on. Write an article of less than 1,000 words telling us about one that you use and how it makes your work easier, along with tips for getting the most out of it. If we publish it, we'll pay you $100. (Send us a query first to be sure we haven't already published a story on your chosen topic recently or have one in hand.)

Tina Gasperson writes about business and technology for some of the most respected publications in the industry. She's been freelancing since 1998.

Vendors challenged to justify $Kiwi pricing

The strength of the New Zealand dollar has tech buyers crying foul


By Ulrika Hedquist and Randal Jackson Auckland | Monday, 21 January, 2008

ICT buyers are crying foul over local vendor pricing, saying the differential with US pricing is out of hand as the kiwi dollar surges in value.

When looking to buy Office 2008 for Mac for his personal computer, local IT manager John Holley was shocked to find that the New Zealand pricing for the software converted into nearly twice the US price.

Initially he thought that this was limited to Office for Mac, but then he noticed that the price difference was the same for Office 2007.

“The pricing for Office 2008 in New Zealand is extortionate,” says Holley. “I am much better off buying [the software] off Amazon. That way I can save several hundred dollars even after paying air freight.”

“Is this how Microsoft helps New Zealand’s knowledge economy — by making us pay more than US customers?” he says. “It would be interesting to hear how Microsoft can justify such a significant price uplift.”

Office 2008 for Mac costs NZ$899, compared to the US price tag of US$399.95. At a conservative exchange rate of 75 US cents, the US price translates into NZ$533, which makes for a cost differential of NZ$366, or 41%, says Holley.

Office 2008 for Mac Special Media Edition costs NZ$1,149, compared to US$499.95; the difference translates into NZ$482, or 42%, he says. Even the student edition costs 26% more than students would pay for it in the US, he says.

Even when taking into consideration that US prices are often shown excluding sales tax, and New Zealand prices often include GST, the difference in price is significant.

Holley sent an email to Microsoft’s online customer service but the reply only stated that Microsoft’s prices vary by region and are determined based on factors such as exchange rate, local taxes, local market conditions and retailer pricing decisions.

When contacted by Computerworld, a Microsoft New Zealand spokesman said the company had no further comment.

Computerworld had a look at the pricing of some other software products. The exchange rate was 77 cents at the time. The US price is excluding sales tax, which varies widely from state to state, and the New Zealand retail price includes GST.

And Microsoft is not the only vendor apparently charging a premium to Kiwis:

Product                                 US price    US$ price converted to $Kiwi    NZ retail price
Microsoft Vista Business                US$299      NZ$388                          NZ$729
Microsoft Office Professional           US$469      NZ$608                          NZ$1,149
Adobe Photoshop CS3 Standard            US$649      NZ$842                          NZ$1,337
Norton Internet Security Suite 2008     US$59.99    NZ$78                           NZ$99.99
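
For anyone who wants to check the table, the arithmetic is simple; here it is as a short Python sketch, assuming the 77-cent exchange rate quoted above and the 12.5% GST rate in force at the time. Rounding accounts for the odd dollar of difference.

    RATE = 0.77   # US cents per NZ dollar at the time of writing
    GST = 1.125   # NZ GST of 12.5% in 2008, added for a fairer comparison

    products = [
        ("Microsoft Vista Business",             299.00,  729.00),
        ("Microsoft Office Professional",        469.00, 1149.00),
        ("Adobe Photoshop CS3 Standard",         649.00, 1337.00),
        ("Norton Internet Security Suite 2008",   59.99,   99.99),
    ]

    for name, us_price, nz_retail in products:
        converted = us_price / RATE    # US$ -> NZ$
        with_gst = converted * GST     # what the US price would be with GST added
        premium = (nz_retail - with_gst) / with_gst * 100
        print("%s: NZ$%.0f converted, NZ$%.0f incl. GST, %.0f%% premium"
              % (name, converted, with_gst, premium))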

Holley is not alone in his concern, and hardware costs could also be affected.

Former NZ Post chief information officer and ICT consultant Tony Hood also raised the issue with Computerworld. He was endeavouring to buy two IBM P series systems for a client but found that the price, given the favourable exchange rate, was prohibitive. Hood says the local prices would be more appropriate if the exchange rate was 50 cents New Zealand to the US dollar.

A 50-cent relativity is exactly the internal exchange rate IBM uses for measurement purposes — for comparing year-on-year performance. But that is a coincidence, a spokeswoman says.

“This internal exchange rate does not have any bearing on IBM’s pricing in New Zealand,” she says. “It is for measurement purposes only.”

She says IBM’s pricing model is different in every country, based on local market conditions and other factors, including things like freight costs, importation costs, official exchange rates, duty and tax. Another component is distributor and reseller margins.

“Because of the myriad factors that influence local prices, it is virtually impossible to make meaningful pricing comparisons between the US and other countries, or between any countries, for that matter,” she says.

Buyouts, network overloads among holiday happenings

A round-up of local ICT news over the Christmas-New Year break


By Rob O'Neill Auckland | Monday, 21 January, 2008

SmartPay buys into wi-fi

NZX-listed payments company SmartPay announced just before Christmas that it would acquire wi-fi provider FIVO, a subsidiary of the National Communications Corporation.

FIVO operates wireless hotspots at hotels, motels and retail chains, including the Robert Harris chain.

SmartPay says the company is a good fit with its merchant services offerings and will allow it to “fulfil rapidly increasing retailer and customer demand for access to wi-fi hot spots”.

Spectrum auction results
The Ministry of Economic Development has announced the results of its 2.3GHz and 2.5GHz spectrum sale — both are suitable for WiMax services. The six “provisionally” successful bidders paid $4,374,333 for the spectrum, while two lots were set aside for Maori use and as a managed spectrum park.

The biggest buyer was Canadian provider Craig Wireless, which bought two lots for just over $1 million.

The other buyers included Telecom, Vodafone, Kordia, Woosh and Blue Reach, a subsidiary of CallPlus. The rights will transfer in December 2010 or earlier, by arrangement with existing rights-holders.

ComCom closes Telecom investigation
The Commerce Commission has closed its investigation into allegations that Telecom had squeezed its competitors’ margins through bundling and price discounting, affecting their ability to compete.

The investigation dated back to a 2004 deal which saw Telecom introduce a $10 discount for customers who bought all of a bundle of services comprising home-line, toll calls and broadband internet services.

The Commission concluded the “bundle” did not breach the Commerce Act because “efficient competitors could earn positive margins over the combined calling and broadband offering when they sold similar bundles.”


Robinson joins carbon-trading body
Former Microsoft New Zealand managing director Helen Robinson has joined TZ1, New Zealand’s carbon-trading market.

The market was launched late last year by the New Zealand stock exchange (NZX).

According to an NZX media release, Robinson’s role will be in the voluntary carbon-trading market, where she will be “building relationships and developing the brand strategy of TZ1.”

She will report to TZ1 chief executive Mark Franklin.

“I strongly believe that the TZ1 market can forge a position as a global leader in this evolving space,” Franklin said in the release.

“TZ1 is positioned to be the only straight-through, transparent and regulated market for carbon credits in the Asia-Pacific region, and one of only a handful around the world.”

Robinson left Microsoft in September, after heading the company in New Zealand for two years.


Vodafone stumbles
Vodafone Australia and New Zealand are reviewing their network capacity after roaming customers lost service and had their international text messages seriously delayed over Christmas because of network congestion.

Vodafone Australia conceded it had underestimated the volume of text messages that would be sent over the holiday period, leaving the network beset by delays.

“We recognise it was an ongoing issue for [between] seven and eight days,” a Vodafone spokesman said.

Vodafone New Zealand said the matter was “out of our hands”, drawing a sharp response from TUANZ chief Ernie Newman.

“For Vodafone to describe the collapse of its trans-Tasman international roaming service as ‘totally out of our hands’ is unconvincing, unprofessional and unacceptable,” he said.


Security sell-out
Israeli internet company Allot has bought New Zealand network security developer Esphion. In the early-January deal, Allot paid US$3.5 million up front, with a further US$2 million to follow if milestones are met, a total of about NZ$7.2 million.

Esphion was given NZ$400,000 by Technology New Zealand in 2004 to develop software to detect and stop computer worms, NZPA reported.

Shortly after the Esphion announcement, Auckland-based security consultancy Security-Assessment.com was bought by Singaporean company Datacraft for NZ$5 million.

Argent sale

Argent Networks is selling its assets to US communications-software company Redknee Solutions for $5 million, with another $5 million to be paid if targets are met, reports the Dominion Post.

The fire sale is subject to shareholder approval and is unlikely to produce any payout for the company's ordinary shareholders.

Argent has been involved in court action with shareholders over changes to its constitution.

E-tales: iPhones in the wild

Thousands of iPhones on Vodafone’s network in New Zealand, says source


By Jo Bennett Auckland & Wellington | Monday, 21 January, 2008

Our Helen’s a Colossus fan
Our Prime Minister must be a secret techie. Helen Clark apparently spent part of her hols taking a look at the rebuilt British Colossus computer at Bletchley Park. Computerworld has been corresponding with the venerable establishment which, with the aid of the Colossus, helped win World War II by decoding German signals. Parts of the Colossus have been on display at MOTAT in Auckland since August, in the museum’s “Machines that Count” exhibition.


iPhones in the wild
One of our e-talers hangs out with a super-geeky crowd, eight out of 10 of whom own an iPhone. Just before Christmas, said e-taler was pleasurably whiling away the time in a Newton, Auckland, bar with five of said friends, four of whom are proud iPhone owners, when they spotted the first “iPhone in the wild”, as they put it. Mucho excitement ensued, with the four iPhone owners closing in on their new best friend, flashing their phones.
An hour or so later, the geeky team was amazed to see yet another iPhone in the wild, in a restaurant in the same area. Seems like Newton is where it all happens. But don’t be surprised if you spot an iPhone on the street one of these days. A source told our e-taler that there are currently 3,000 iPhones on Vodafone’s network in New Zealand.



Not so sweet-smelling
“Revenue is the only deodorant” — The New Zealand country manager of a multinational company defines how he is measured.



Post-Christmas war stories
We know, we’re well into January, but E-tales couldn’t resist a holiday offering which perhaps our most convivial e-taler heard over a few January beers:
A Wellington IT exec’s partner emerged from the shower naked and then, momentarily nonplussing her man, suddenly fled to the bedroom. Looking up, the exec saw a man in a uniform in the window. Yes, the Salvation Army was collecting for Christmas. After handing over some small change, he was entertained by the Sally band breaking into an impromptu version of Rudolph the Red-nosed Reindeer.



The anti-waffle clause
Inviting tenders for a system to manage ICT during national disasters, the Ministry of Foreign Affairs and Trade specifies that responses should “contain short, content-rich, specific answers to the questions posed”.
Content-rich? Why wouldn’t a document be content-rich? Indeed, there’s little else in most documents but content. But, having read a good few windy tenders, our e-taler reckons he knows what the ministry is saying: be brief and to the point.



PAL on the job
The Great Unwashed — that’s us, the public — can at long last access PAL (Public Access to Legislation), through a revamped website.
Attorney-General Michael Cullen formally announced the opening of the site last Wednesday. Knowing Computerworld’s deadlines, the Parliamentary Counsel Office sent us an early draft copy of Cullen’s media release, accompanied by a warning that it was intended just as a “heads-up”.
“Please note this is just a draft and not suitable for use,” the covering note said.
How right they were. The release included a supposed hypertext link to the site, at www.legislation.govt.nz. In fact, the link takes one to the “government jobs online” website, at www.jobs.govt.nz.
Maybe now the much-delayed project is finishing up, those involved are looking for new jobs and got their URLs mixed up.



Hubris loses Top Gear host £500
That’s a whopping great $1,239 in South Pacific pesos.
Jeremy Clarkson, host of the BBC’s Top Gear programme, who is not known for his moderate views, surpassed himself recently when he published his banking details in a British newspaper in a foolish attempt to prove that identity fraud was not such a big issue. He got pinged to the tune of £500, which an anonymous prankster transferred from Clarkson’s account to the charity Diabetes UK.
This despite Clarkson banking with Barclays Bank, whose security is top-notch. Clarkson had pooh-poohed concerns over the massive UK data loss last October, which saw 25 million Brits’ personal details exposed when two discs containing child benefit claims went missing.
“All you’ll be able to do with them is put money into my account. Not take it out,” opined Clarkson, who called it all “a palaver over nothing”.
Nice one, Jeremy.

The burning IT issues of 2008

Frank Hayes gives his picks


By Frank Hayes Framingham | Monday, 21 January, 2008

Ready for 2008? Budgets may tighten up, but IT's challenges will just keep growing: security problems, virtualisation technology, legal issues, users who can't be stopped and the baby-boomer brain drain. Here are the major issues to watch out for in the coming year:

1. The economy A few months ago, in Computerworld US's latest Vital Signs survey, 47% of CIOs polled said they expected their IT budgets to rise; 12.5% was the average predicted rate of increase. But the bill is coming due for shaky mortgages, the dollar keeps dropping, and a business slowdown looks inevitable. Don't slash your budget plans yet, though. Ask how your CEO plans to respond, then map out how IT can help. Cutting costs is one thing, but if your company snaps up a few acquisitions, you'll need more IT budget, not less. First, you need to know the plan. Find out.

2. Virtualisation Ignore how vendors sling this buzzword around. Instead, look at virtualisation — of servers, desktops or storage — in terms of how it lets you respond faster to changes in what users need. That's where business advantage comes from, but it won't come easily, so get started. By 2010, when users need results, you'll be able to deliver them while the business opportunity is still hot.

3. Plain text is dead That's your new mantra for data security. No valuable company information should go unencrypted across a wire, onto a disk or into a backup. Encryption is the ultimate defence against everything from hackers to users with USB flash drives. We've now got the CPU horsepower and the crypto technology. This year, start using it; a minimal sketch follows this list.

4. Consumer tech You can't keep this stuff out of the office, so stop pretending you can. Users want iPhones? Give them the webmail and applications they need. They want to use webcams or Second Life for meetings? Track what they're doing, watch for security holes, and close them. Don't say "no", say "here's how" — or challenge users to suggest how to make their gadgets business-safe. They may surprise you.

5. Desktop Linux Not this year. The functionality is now there, and so are the applications and user-friendliness, but inertia is still Windows' friend. Retraining users with a billion worker-years of Windows experience is Linux's next big hurdle.

6. Patents And not just Microsoft's sabre-rattling at Linux, or the endless patent lawsuits against IT and wireless vendors. Patent holders are now trying to control whether customers can resell equipment, who can repair it and what it can be connected to. In 2008, the US Supreme Court will rule on those questions, which affect everything in IT from whether toner cartridges can be refilled to how much we can mix and match technologies. Stay tuned.

7. Retiring baby boomers With your baby boomer IT staffers (born 1946-1964) ready to retire, you could lose lots of critical knowledge about your business IT — right? Well, maybe. But plenty of those aging careerists sledding toward retirement just represent lots of inertia and resistance to change. Start identifying specific older IT experts worth keeping. For the rest — well, isn't it time for the next generation to step up to the challenge?
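
Point three is the most directly actionable of the seven, so here is a minimal sketch of what "plain text is dead" can look like in practice, in Python, using the third-party cryptography package's Fernet recipe. The package choice and the sample record are assumptions of mine, not anything Hayes prescribes, and key management, the genuinely hard part, is waved away.

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this in a real key store, never beside the data
    cipher = Fernet(key)

    record = b"customer: J. Smith, card ending 4242"
    token = cipher.encrypt(record)   # ciphertext, safe for disk, tape, or a USB stick

    assert cipher.decrypt(token) == record   # round-trips only with the key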

It's still all about business. Either we're technology plumbers or we're business enablers. Plumbers will get downsized, outsourced and offshored. Enablers will be critical members of the business team. That's a brutal split, but it's the IT world of 2008. Which way do you want to go?

Lies and statistics

Secunia have reported that more flaws were found in Red Hat Linux (633) than in Windows (123), but even a blind man can see that this is nowhere near a fair comparison.

Red Hat Linux is made up of the core operating system and thousands of third-party applications that people can choose to install (or not). Of the 633 security flaws found in Red Hat Linux, 99% were in the third-party applications; only 1% were in the core OS.

Windows, however, had only 123 flaws, but 96% of them were in the core operating system. Since third-party apps are not supplied or supported by Microsoft, their bugs were not added to Windows' total as they were in Red Hat's case.

Does anyone else think that this is not a fair comparison? I can tell you one thing: I'd rather have a core OS with 1% of 633 flaws (about six) than one with 96% of 123 flaws (about 118). The OS results could just as easily have been put “Windows had roughly 112 more core-OS security flaws than Red Hat Linux.”
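
The sums are easy enough to check. Here they are as a few lines of Python, using the percentages quoted above:

    redhat_total, redhat_core_share = 633, 0.01
    windows_total, windows_core_share = 123, 0.96

    redhat_core = redhat_total * redhat_core_share      # 6.33 core-OS flaws
    windows_core = windows_total * windows_core_share   # 118.08 core-OS flaws

    print("Red Hat core OS: %.0f flaws" % redhat_core)
    print("Windows core OS: %.0f flaws" % windows_core)
    print("Difference: %.0f more core-OS flaws in Windows" % (windows_core - redhat_core))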

With regard to Firefox, they also seem to be counting flaws that Mozilla have found themselves. We know they are not doing the same for IE, because Microsoft don't announce flaws they find themselves. Again, not really a fair comparison.

More interesting, however, are the patching statistics for IE and Firefox.

Out of eight zero-day bugs reported for Firefox in 2007, five have been patched, three of those in just over a week. Out of 10 zero-day IE bugs, only three were patched and the shortest patch time was 85 days.

(taken from here)

Microsoft’s best patch result was 85 days to release and only three out of 10 flaws patched, versus five out of eight for Firefox, with the fastest fixes out in just over a week.

Statistics are all good and interesting, but taken in the wrong light, can paint a picture that is dangerously incorrect.

The Free Software hardliner, the corporation, and the shotgun wedding

By Josef Assad on January 19, 2008 (2:00:00 PM)

We called it Free Software at first. It wasn't until we started calling it Open Source that the punditry line counts began creeping up higher than the code line counts. We had this baby and we were proud of it, and the deep-rooted insecurity born of being the ridiculed and utterly misunderstood underdogs made us require the approval of business and Grandma Bessie before we could ourselves be satisfied.

Well, now we've got it, and in some ways Open Source is not better off because of it.

Thanks to the cavorting evangelists in the Ubuntu community who were converted on the strength of a cheaply-gotten sense of technical superiority over their peers who still use Windows 98, "Free" is now a dirty word since it stands so often in the way of converting more people, faster.

Free Software wasn't originally meant to be a cult with a membership statistics monkey on its back; the idea was, like, to be "Free."

And what happened when we celebrated the business adoption of Open Source? Many great things happened (just look at LKML), but something bad has also happened. We started thinking that we'd have to get around the notions of liberty and sovereignty to make our baby pretty for the Big Business IT Fashion Show.

We showed a willingness to sweep our sense of liberty under the carpet, and in return we got Sun's OpenOffice.org and Linux versions of Skype. We got binary blobs for video drivers -- crystal-clear memos from the video chipset companies that said they were so uninterested in Open Source that they didn't want to benefit from the most trivial advantage of Free Software: development cost savings.

When we nagged, it was for an updated Linux Flash plugin. We didn't lobby for a Free one.

I don't want to malign the term "Open Source," but rather use it as a portmanteau for what we have today versus what we built before. Open Source is about the software, while Free Software is about people. And you can have good -- and open -- technology if you have a strong community, but you can't necessarily grow a healthy community just because you have open technology.

Where did we go wrong?

Wasn't the magic of licensing going to preserve all that we held dear? The GPL was more than just a license; it embodied a philosophy, and it contained a statement of intent: "This is our software," it said, "and you're welcome to come in and play. Here are the rules which guarantee that other people can also come in and play."

The four basic freedoms of Open Source -- the rights to view, modify, redistribute, and use for any purpose -- go a long way toward inclusiveness. The idea of inclusiveness is to create an environment conducive to establishing and preserving a community.

And then something happened, and we got Open Source projects with characteristics which were to community what North Korea is to democratic government. They had corporate stewards who sat at their front doors, checking everyone going in; Sun Microsystems with its office suite being a canonical example, Canonical with its Launchpad system being a shining other. They had corporate sponsors who used the developer community as free (skilled) labor or, worse, as a testbed: "Let the rabble use it and let's see what breaks."

The Free Software community isn't there to build revenue or to have development models built around it. That wasn't the original idea. We wanted to control our own technology. Acme Solutions tries to control the development community around its system, and when this doesn't work, what does it do? It throws the HR department at the problem and ends up with an internal group of developers, which hardly qualifies by any standard as a community. And the point here is: "If there isn't a community, is this what we had in mind when we started out with Free Software?"

There are some projects out there which have changed the face of Free Software: the Netscape codebase certainly has, as has the StarOffice suite. In both cases though, the codebase is so broad, complex, and mired in "let's build a suite, we can't miss any features" development philosophy that these systems are not very approachable to begin with, which is why the larger projects are worked on mainly by people who are paid to do so. To each their own, of course, but we had our own way of doing things and it wasn't by building "suites" or leviathans of clumped capability.

The classical Unix concept of small and simple components in a conceptual tool chain, held together by stark and self-documenting interfaces, probably wouldn't have given us OpenOffice.org Calc. Or Firefox. Or Evolution. But what is interesting about these prime examples of Free Software with corporate-monolith-suite-itis is this: they all end up sporting plugin systems. To be honest, a plugin system is, at a high level, a scream for help, a way of saying, "This codebase is too beastly! We need to export a simplified interface for developers to be able to contribute more easily!" Plugin interfaces are admissions of guilt. They are unconscious confessions that it was a mistake to discard the tool chain architecture.
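
To make the "simplified interface" point concrete, here is a toy plugin registry in Python. Every name in it is invented, but it shows the shape of the thing: one small, stable surface, so contributors never need to read the beastly core.

    PLUGINS = {}

    def register(name):
        """Decorator a plugin author uses to hook into the host application."""
        def wrap(func):
            PLUGINS[name] = func
            return func
        return wrap

    @register("shout")
    def shout(text):
        return text.upper()

    def run(name, text):
        """The only entry point the host core exposes to plugin code."""
        return PLUGINS[name](text)

    print(run("shout", "hello, world"))   # HELLO, WORLD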

What happened to Free Software when it courted the corporation -- apart from changing its name -- was that it bent over backwards to accommodate corporate IT characteristics. A business is more likely to develop ERP software than to develop a small application such as sed. Corporate-driven monster applications are less approachable, and therefore less likely to attract by any organic means a developer base, than simpler applications. "Small is beautiful" just isn't in the corporate DNA, at least without some sort of revolution.

We don't get as much "release early, release often" as we used to, either. "We'll release it when it's ready" is now dirty: the more professional communities have release schedules. Eh, "professional"? As opposed to "amateur"? If the last 10 years of geekery and the Internet have taught us anything, it is that amateur culture is not another way of saying "this sucks." Remember, GNU didn't start with an IBM grant, and neither did the Linux kernel. And they came from somewhere.

The Free Software modus operandi and free culture are not antithetical to the corporate model, but I don't think the two have figured out how to mesh yet. That guy sitting in his mom's basement, getting a CRT tan and letting his beard grow, who wrote your IDE driver, and the dark-suited dude with the pointy hair and the Mont Blanc pen are joined at the hip -- but they could both be enjoying it a bit more.

Novell's people look like they have a pretty clear idea of what they are doing with their corporate Linux systems, but when it comes to openSUSE and the associated community, there's a certain sense of disconnect -- which is probably innocent; neither side has worked out how to speak to or make use of the other, and distro releases are like difficult births. The openSUSE/SLE model actually looks like a me-too of the Red Hat/Fedora divide: "Oh, look at what Red Hat is doing! Let's do it, too!"

IBM products that get open-sourced don't usually attract a traditional developer community. Sun Microsystems wants Free Software that, paradoxically, it can control. Is the Firebird RDBMS really any more accessible now that it is open? When an organization opens a product, it typically continues to invest in development instead of pouring effort into creating a genuine, sustainable organic community structure. Open Source becomes a marketing sound-bite, like an ISO Certified label on a can of sun-dried tomatoes. ISO certification used to mean something, but time dilutes.

If the idea of Open Source is to make IT cheaper for the bean counters, we've succeeded. Those who believe Free Software is supposed to change the way people think about ideas and information, not just save money, often feel frustrated about how their creation has been co-opted by people who don't share their desire to be "Free."

Political scientists learned the hard way that introducing democracy before improving education was a recipe for trouble, and I think one lesson we can take away in our field is similar. Our code is inextricably ideological. Free Software is a choice, an option, and also a movement. If we don't educate our users about the ideology behind Free Software, we not only cheapen Free Software, but lose everything that made it special in the first place.

Josef Assad got started with Free Software by co-founding the Egyptian Linux User Group. He helped to start up one of the first free software ICT4D companies in the Middle East, and has worked with the Grameen Foundation on open source microfinance information systems. Putting his money where his free culture mouth is, he released his first novel under a Creative Commons license.