Tuesday, October 14, 2008

Winners and Losers of Free Wi-Fi

By Rick Aristotle Munarriz
October 14, 2008

Free nationwide Wi-Fi is one step closer to reality, now that FCC tests show that a chunk of available spectrum can support a countrywide network without interfering with rival spectrum holders.

Don't ditch your access provider just yet. This is, again, just one step in what will likely be a very long and winding road toward skinflint broadband.

The auction itself is still at least several months away. The eventual winning bidder will also have plenty of time to roll out the free product. The government is requiring that at least 50% of the nation be covered within four years, and 95% of the country within 10 years.

And lest you get all excited by thoughts of all-you-can-surf wireless, know that this isn't a charitable initiative. At least one likely bidder -- startup M2Z -- is looking at an ad-supported model, with paying subscribers receiving faster access. In other words, the free product is unlikely to satisfy cyberspace speedsters. The FCC will also make sure that free access is filtered; file-swappers and fans of X-rated sites will likely have to look elsewhere.

Still, canvassing the country with free connectivity will be a game-changer, full of opportunities and challenges alike. Let's look at both sides of the story.

The losers
The telcos and cable companies providing Web access may get hurt the most, especially among their entry-level pricing plans. If cheap, slow connectivity is what those consumers want, a free ad-supported model will hit the hot spot.

Most access providers are gargantuan companies with diversified product lines, but some are pure ISPs like Earthlink (Nasdaq: ELNK), and to a lesser extent, United Online (Nasdaq: UNTD). United Online has actually been moving away from its dependency on its Juno and NetZero dial-up offerings, acquiring properties like MyPoints, Classmates.com, and most recently FTD. Earthlink has not.

Wireless carriers should also feel the sting. Pitching subsidized handsets with expensive data plans won't be an easy sell if the public turns to Web chat alternatives.

Premium entertainment providers may get pinched, too. Sirius XM Radio (Nasdaq: SIRI) could face an uphill battle against free, universally accessible Internet radio. Local network affiliates, and even the cable giants, will suffer if couch potatoes begin to stream on their own terms.

In short, if you provide a premium service that has a reasonable Web-delivered alternative, free Wi-Fi is not your friend.

Another unlikely loser is Microsoft (Nasdaq: MSFT). Free users are likely to turn to cheaper Linux-flavored operating systems, bypassing premium productivity software like Microsoft Office in favor of free Web-based apps like Google Docs. Cloud computing is coming anyway, but free nationwide Wi-Fi will make it even more pervasive.

The winners
Free Wi-Fi's aim is to provide deeper market penetration. That will naturally benefit e-commerce and online advertising companies, but investors need to be realistic. Folks who flock to free connectivity won't be voracious shoppers at Amazon.com (Nasdaq: AMZN) or desirable leads for sponsors on Google (Nasdaq: GOOG).

Still, free Wi-Fi should be a "net" positive for both companies. If you're shopping at the mall and can hit Amazon for a little comparison shopping -- many people do so already, and more will follow in a free Wi-Fi future -- Amazon will be treated to incremental sales.

Google will also be a winner, especially if the top bidder turns to this online ad king to monetize its landing page.

Another wave of profit may come from display advertising, an area where Yahoo! (Nasdaq: YHOO) excels. Yahoo! knows that it can't compete with Google on the paid-search side, but it knows how to milk the most out of less lucrative brand-enlightening display ads.

The scorecard
Dump the losers? Back up the truck on the winners? Not so fast. We may still be several years away from a reality of free countrywide Wi-Fi. That's a lot of time for the companies that now look like losers to arm themselves with the right tools to stay relevant.

Sirius XM Radio already has a Web-streaming product; an ad-supported freebie could help it compete with free Internet radio and serve as a gateway drug for the premium product. Phone and broadcasting titans have too much at stake to go down quietly, so expect them to map out ways to thrive in a changing environment.

The road is long, but it's never too early to study the map.

Wednesday, October 1, 2008

Avoiding the 5 Most Common Mistakes in Using Blogs with Students

By Ruth Reynard

I've used blogs in my classes for five years with university graduate students. I've found them to be extremely helpful in certain circumstances, but only when their use is clear to students. Students who object to the inclusion of blogs in a course are usually objecting to what they perceive will be just one more task on top of a myriad of others, or simply busy work that will not benefit their learning. Older students can also reject the notion of "publication" that is inherent in blogging. Each of these objections can be addressed by an effective and innovative instructor through careful planning and skillful management. There are, however, several common mistakes that should be avoided when using blogs in instruction. I have made all of these mistakes and have learned how to address each one proactively.

1. Ineffective Contextualization
As with any instructional tool or learning support, without a clear context within which the tool is to be used, students will not understand the benefit to their learning and will, ultimately, reject the tool. To contextualize an instructional tool effectively, instructors must think carefully about exactly where the tool will be used in the flow of the course, how often it will or might be used, and how necessary it is to the learning process. In the case of blogging, the most effective use of the tool is in the area of self-reflection or thought processing. As such, there must be concepts for students to think through, various resources and content segments to process, or ideas to construct. Simply asking students to blog without this level of planning will lead to frustration for the students. In other words, a certain amount of content must already be covered or made accessible to students before blogging will really support the learning process. While a blog can also provide social or academic placement of students within a group, blogs are fundamentally individual in their purpose and essence. That is, while comments can be added or ideas posted following a blog entry, these sit outside the initial posting; blogs are not wikis or online discussion forums. Therefore, if individual self-reflection is the central benefit to the learning process, instructors must plan carefully when in the course self-reflection will enhance learning for each student. Please note: instructors can glean additional benefits from blogs, such as access to student voice and insight into students' progress in constructing ideas and concepts, but deciding when and how to use blogs in instruction should be driven first by the individual benefit to students.

2. Unclear Learning Outcomes
Following on from placing blog use within the instructional flow is the notion of designing blog use around learning outcomes.

Learning outcomes are much more than course objectives. Learning outcomes begin with course objectives but also include student learning needs and objectives, as well as the future application of the learning. Therefore, understanding the global nature of a course's learning outcomes is crucial to good planning and use of learning resources and tools. Choosing the blog tool in a course would mean that the transferable skills of critical thinking, thought processing, and knowledge construction would be well supported and recorded. If the instructor is unclear as to what the learning outcomes of the course are and is focused only on course objectives, the potential of the blog tool may not be maximized. The following are several ways in which the use of blogs in instruction can develop new higher-level thinking skills:

Analysis: A blog can help students process their thoughts and ideas for analysis. There is no better way to begin to see the importance of analysis than when there is a goal of articulating your thoughts to explain them to others. That is, if two ideas are presented together in support of one concept, self-reflective students must learn to a) distinguish the ideas, b) understand the differences and similarities between them, c) understand where the connection points are, if any, and d) decide, based on analysis, which one (if any) they will include and build upon in their own learning process. This is a highly constructive process; the skills it requires must be intentionally encouraged and can be visibly recorded in a blog.

Synthesis: As part of the analysis, it is important that students can synthesize the original ideas and the new ideas they will articulate. The synthesis of ideas is crucial to the process of working through ideas and incorporating new ones into their own thinking.

New ideas: Grasping new ideas through analysis and synthesis means that students can move ahead with their thinking and move closer towards transformation in learning and application. Information is not what makes a new idea. Information must be processed and applied before new ideas will emerge for students. Too many instructors remain at the information-exchange stage with students and do not move them towards new ideas. A blog can help develop these thinking skills as well as capture the new ideas well for others to view and absorb.

Application: Without application, new ideas are not "owned" by students in their learning. That is, new ideas can only become meaningful and relevant for students when they are directly applied in real-life contexts of practice and use. This stage can also be well captured in a blog and, in fact, the entire thinking process of each student can be captured and made accessible for instructors and other students to explore.

Note: Each of these stages of thought development must be intentionally supported by instructors through comments and feedback and expectations communicated to each student. Additionally, grades should reflect the entire process of learning, not simply the end product, if students are to understand the value to their own learning.

3. Misuse of the environment
As I mentioned before, blogs are not wikis and they are not online discussion forums. The essential difference between a blog and other online tools is that it is intended to be an individual publication: a one-way monologue or self-post on which others may comment but to which they do not contribute. The original post remains as the person who posted it wanted it to be. This is important to realize in the instructional setting. If a discussion is desired, then blogging would not be the tool of choice. In the same way, if journaling is the intended goal, then an online discussion forum would not be the tool of choice. It is important to realize, as an instructor, that if you desire a journal-type setting, then your comments should be supportive and constructive and not intrusive; otherwise the student(s) will cease to post. Blogs can take on a discussion-like quality if there are many subscribers and participants. That is, you can "hear" from every student on one topic or another by creating a blog ring to which they can subscribe. The self-posting, however, remains the same. That is, unlike a wiki, where changes can be made to posts and documents, in a blog the initial post always stands and is simply responded to, not altered in any way. When blogs are used to encourage students to articulate their thoughts, students can become empowered and feel that they are developing their own voice in the learning process. Instructors can also "glimpse" students' thought processes and become much more aware of their learning journey.

4. Elusive grading practices
Grading of blogs should have clear rubrics so that students do not become confused as to how their work is being evaluated. As blog posts are essentially a series of statements, I have suggested elsewhere that, depending on the learning outcomes of your course, specific statement types to recognize in your assessment rubric might be:

* Reflection statements (self positioning within the course concepts);
* Commentary statements (effective use of the course content in discussion and analysis);
* New idea statements (synthesis of ideas to a higher level); and
* Application statements (direct use of the new ideas in a real life setting).

As already mentioned, blogging can move students forward in their thinking, help them process to a higher level of understanding, and apply the learning to a practical context. If the grading is not clear and the tool is simply made available to students, not only will students become discouraged, they will likely not participate. As I have seen on numerous occasions, it is when students continue regular use of the blog throughout a course that their learning is truly supported and their thinking truly challenged. It is, therefore, important to keep students focused with regular reminders and to keep expectations clear and grading transparent. Timelines for completion should also be set so that students know how much time they have to use the blog tool.

5. Inadequate time allocation
The notion of adequate time is not discussed often enough in the use of technology in learning. Just as students differ in their processing time within any learning context, so adequate time should be given for every student to complete work using online tools such as the blog. Instructors should be reasonable and, if possible, leave the blog tool open until the end of the course. This will help students maximize the benefits of the tool and will also provide more time for students who need it. Because online tools provide a more immediate learning context for students, they also usually encourage more participation. This participation in turn produces more text or other response types from students, and ultimately more for instructors to read through or view and grade. Therefore, instructors should plan ahead and plan well for the increased work that will likely take place when their students are using online tools.

Students should be fully aware of what the expectations are and how the tool is being used in their learning process. Once students understand this, they are more likely to participate, and with a greater degree of critical awareness. While there are many mistakes that can be made in using any new tool in instruction, instructors should approach their use with a question-and-answer mindset. It is important to find out what problems or challenges exist and to find solutions quickly. Instructors who use online tools must be innovative in their approach, creative in their course design, and flexible in their methods in order to ensure successful learning experiences for their students. While there is no one way to use any instructional resource well, it is important to integrate the use of any tool or learning resource into the overall course design intentionally and in a way that fully supports the learning outcomes for the students.


Ruth Reynard is the director of faculty for Career Education Corp. She can be reached at rreynard@careered.com.

Cite this Site
Ruth Reynard, "Avoiding the 5 Most Common Mistakes in Using Blogs with Students," Campus Technology, 10/1/2008, http://www.campustechnology.com/article.aspx?aid=68089

Stanford Testing iPhone Application Suite

By Dian Schaffhauser
A suite of five software applications developed by students at Stanford University to run on Apple's iPhone is now being tested on campus. Two are for students, to manage course registration and bills. The other three will allow users to access Stanford's searchable campus map, get team scores and schedules, and check listings in the university's online directory, StanfordWho.

The university contracted with Terriblyclever Design, a startup company in San Francisco co-founded by Stanford student Kayvon Beykpour, to develop the suite of applications under the university's iApps Project. Beykpour is a junior majoring in computer science, and five of his company's six full-time employees also are undergraduates at Stanford.

During a pilot phase that launched recently, a select group of students who work in residential computing will test a beta version of the iPhone applications.

"We have talented students with good ideas about how they want to access administrative systems and services," said registrar Thomas Black, whose office is overseeing the project. "We want to harness their genius. We want to be able to say, 'You can come to Stanford, where students develop the applications that students use.'"

"We really were passionate about being more engaged in these systems," Beykpour said. "I am a student, and I use all these services, and I can't tell you how exciting it is to spend your time working in a capacity that you love working in--but also such that your final product affects your community."

Project leaders said the idea of letting students access key online systems and resources at Stanford via the iPhone began last May, when administrators in the registrar's office had a vision of introducing mobile applications that would enhance student life. The administrators then got in touch with Beykpour, and his company proceeded to develop the applications over the summer.

The university is offering a computer science course this fall titled, "iPhone Application Programming." The class currently has more than 80 students registered.

"We're offering this class because we think it provides students with a good way to exercise the foundations of computer science on an exciting new platform," said Mehran Sahami, an associate professor of computer science overseeing the course.

Acknowledging that security is a top priority, Tim Flood, director of student affairs information systems, said the same principles and practices currently governing the use of laptops and desktop computers at Stanford also will apply to mobile devices using the new applications. The applications will also be compatible with the iPod touch.

Earlier in September, Apple launched its iPhone Developer University Program for institutions looking to introduce curricula for developing iPhone or iPod touch applications.


Dian Schaffhauser is a writer who covers technology and business. Send your higher education technology news to her at dian@dischaffhauser.com.

Cite this Site
Dian Schaffhauser, "Stanford Testing iPhone Application Suite," Campus Technology, 10/1/2008, http://www.campustechnology.com/article.aspx?aid=68044

Seton Hall Monitors Recruitment Dollars with Coremetrics

By Dian Schaffhauser
Seton Hall University is using Coremetrics to better track its investment in recruitment efforts. The Web-based service, which is primarily used by retailers, captures behavioral data to provide the university with insights on how to allocate marketing dollars across campaigns and channels.

"Our recruitment cycle is a long one, often taking as much as 18 to 36 months from the initial inquiry to enrollment in classes," said Robert Brosnan, director of Web and digital communications. "During the application process, we interact with prospective students through a variety of online and offline channels. Coremetrics gives us one place to go to get all the data we need and to look at our marketing efforts from a holistic perspective. We've learned some surprising things that would have been impossible to discover without the ability to attribute conversion to multiple channels. Our analysis showed, for example, that natural search plays a more significant role than expected deep into the recruitment cycle. As a result of this insight, we have made changes to our online application process and are rethinking our paid and natural search strategies."

The marketing data is used to measure the impact of both online and offline influences, including print advertising, direct mail and college recruitment fairs, for the school, which has 10,000 students.


Dian Schaffhauser is a writer who covers technology and business. Send your higher education technology news to her at dian@dischaffhauser.com.

Cite this Site
Dian Schaffhauser, "Seton Hall Monitors Recruitment Dollars with Coremetrics," Campus Technology, 9/30/2008, http://www.campustechnology.com/article.aspx?aid=68038

More Universities Sign with Hothand Wireless To Deliver Mobile Marketing

By Dian Schaffhauser
Hothand Wireless, which delivers "recreation" information to mobile devices, said it has added 10 university partners to its service, among them Georgia Institute of Technology, Ohio State University, and Stanford.

The company offers a mobile Web application with opt-in text messaging that lets students and others access information such as sports scores, standings, schedules, facility availability, contests, polls, and advertising on Web-enabled phones and other wireless devices. The network is sponsored by merchants that want to reach the university community, and in some cases, the company said, a portion of sponsorship funds or merchant fees are allocated to the school.

The service is available to all students and staff members in partner schools. Registration is required to access the service.

The company launched its University Mobile Network with a pilot program at the University of California, Los Angeles, which was sponsored by Best Buy Mobile. UCLA students also took advantage of special deals from Subway delivered to their phones.


Dian Schaffhauser is a writer who covers technology and business. Send your higher education technology news to her at dian@dischaffhauser.com.

Cite this Site
Dian Schaffhauser, "More Universities Sign with Hothand Wireless To Deliver Mobile Marketing," Campus Technology, 10/1/2008, http://www.campustechnology.com/article.aspx?aid=68034

Friday, May 9, 2008

XP SP3 Glitch a 'Gotcha' For IE7 & 8

By Stuart J. Johnston
May 7, 2008

Microsoft finally released Windows XP Service Pack 3 (SP3) to the general public earlier this week, after a minor glitch or two the week before. It is an update that many XP users have been impatiently awaiting for months.

Despite the fact that it's now available, however, the company still has a caveat for some users. If you have Internet Explorer 7 or 8 already installed, you may want to uninstall it before installing SP3. Then, if you wish, you can reinstall IE afterwards.

Why? As the 1990s buzz phrase goes: it's complicated.

At least that's the message in a posting made on Microsoft's (NASDAQ: MSFT) IE team blog this week.

It revolves around the fact that XP SP2 shipped with IE6, while XP SP3 ships with a slightly different version of IE6. It also concerns the order in which the service pack and IE7 or IE8 are installed.

"If you choose to install XP SP3, Internet Explorer 7 will remain on your system after the install is complete. Your preferences will be retained. However, you will no longer be able to uninstall IE7," Jane Maliouta, deployment program manager for IE8, said in her blog post. The same goes for IE8, which is currently in beta test.

That's because the uninstallation process saves the wrong set of IE6 files on your hard disk, which would cause big problems later – so you're locked out of simply reverting to IE6.

The best way to handle the problem, Maliouta said, is to first uninstall IE7, then install XP SP3, and finally reinstall IE7.

For the more adventurous who may have installed the beta test release of IE8, the warning counts double. Microsoft has set its download sites to not offer SP3 to users who already have IE8 installed – for good reason. If you install SP3 on top of IE8, as with IE7, you will no longer be able to uninstall the beta software.

"Since people are more likely to uninstall beta software, we strongly recommend uninstalling IE8 Beta 1 prior to upgrading to Windows XP SP3 to eliminate any deployment issues and install IE8 Beta 1 after XPSP3 is on your machine," Maliouta added.

Two analysts said they don't view the situation as a significant problem, but one said that it makes the update process more complex than it should be.

"I suppose there could be some applications that are affected, but I don't see it having any impact on most users," Michael Cherry, lead analyst for operating systems at researcher Directions on Microsoft, told InternetNews.com.

Roger Kay, president of analysis firm Endpoint Technologies, was of a similar mind.

"It sounds like a glitch [Microsoft] needs to fix, but it doesn't sound like a big deal," he said. "Still, a user shouldn't have to go through a lot of work to get it fixed," Kay added.

The company had planned to release XP SP3 last week, but that fell through after Microsoft found a clash between the service pack and Microsoft's Dynamics Retail Management System (RMS).

Microsoft announced on Monday it had put a filter in place so that XP SP3 is not offered to users with RMS installed. Then it released the service pack as planned. The company is working to come up with a solution for RMS users.

Thursday, May 8, 2008

What can you do with a second Ethernet port?

By: Nathan Willis

Purchase a new PC or motherboard soon, and the chances are good that it will come with two built-in network interfaces -- either two Ethernet jacks or one Ethernet and one Wi-Fi. Tossing in a second adapter is an inexpensive way for the manufacturer to add another bullet point to the product description -- but what exactly are you supposed to do with it? If you are running Linux, you have several alternatives.

Plugging another Ethernet cable into the second jack and hoping for the best will accomplish nothing; you have to configure Linux's networking subsystem to recognize both adapters, and you must tell the OS how to use them to send and receive traffic. You can do the latter step in several different ways, which is where all the fun comes in.

The big distinction between your options lies in the effect each has on the other devices on your network (computers, routers, and other appliances) -- intelligently routing network traffic between them, linking them together transparently, and so on. In some cases, the simplest end result is not the easiest to set up, so it pays to read through all of the alternatives before you decide which to tackle.
Bonding

From your network's perspective, the simplest option is channel bonding or "port trunking" -- combining both of the computer's interfaces into a single interface that looks like nothing out of the ordinary to your applications.

A combined logical interface can provide load balancing and fault tolerance. The OS can alternate which interface it uses to send traffic, or it can gracefully fail over between them in the event of a problem. You can even use it to balance your traffic between multiple wide area network (WAN) connections, such as DSL and cable, or dialup and your next door neighbor's unsecured Wi-Fi.

To bond two Ethernet interfaces, you must have the bonding module compiled for your kernel (which on a modern distro is almost a certainty), and the ifenslave package (which is a standard utility, although you might need to install it from your distro's RPM or APT repository).

On a typical two-port motherboard, the Ethernet adapters are named eth0 and eth1, so we will use those names in our example commands. With ifenslave installed, take both Ethernet adapters offline by running sudo ifdown eth0 and sudo ifdown eth1. Load the bonding module into the Linux kernel with modprobe. There are two important options to pass to the module: mode and miimon. Mode establishes the type of bond (round-robin, failover, and so on), and miimon establishes how often (in milliseconds) the links will be checked for failure. sudo modprobe bonding mode=0 miimon=100 will set up a round-robin configuration in which network packets alternate between the Ethernet adapters as they are sent out. The miimon value of 100 is a standard place to begin; you can adjust it if you really want to tweak your network.

To create an actual bond (which for convenience we'll call bond0), run sudo ifconfig bond0 192.168.1.100 up to assign an IP address to the bond, then run sudo ifenslave bond0 eth0 followed by sudo ifenslave bond0 eth1 to tie the physical Ethernet interfaces into it.
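
Putting those steps together, the whole round-robin bond on our example eth0/eth1 box boils down to the following sequence (192.168.1.100 is just the example address used above):

sudo ifdown eth0
sudo ifdown eth1
sudo modprobe bonding mode=0 miimon=100   # round-robin bond, check link health every 100 ms
sudo ifconfig bond0 192.168.1.100 up      # bring up the logical interface with an address
sudo ifenslave bond0 eth0                 # enslave the first physical adapter
sudo ifenslave bond0 eth1                 # enslave the second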

Round-robin mode is good for general purpose load balancing between the adapters, and if one of them fails, the link will stay active via the other. The other six mode options provide features for different setups. Mode 1, active backup, uses just one adapter until it fails, then switches to the other. Mode 2, balance XOR, tries to balance traffic by splitting up outgoing packets between the adapters, using the same one for each specific destination when possible. Mode 3, broadcast, sends out all traffic on every interface. Mode 4, dynamic link aggregation, uses a complex algorithm to aggregate adapters by speed and other settings. Mode 5, adaptive transmit load balancing, redistributes outgoing traffic on the fly based on current conditions. Mode 6, adaptive load balancing, does the same thing, but attempts to redistribute incoming traffic as well by sending out ARP updates.
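
If failover matters more to you than load balancing, you would load the module in active-backup mode instead. As a rough sketch (you may need to tear down an existing bond and unload the module so the new options take effect):

sudo ifconfig bond0 down                  # tear down any existing bond first
sudo rmmod bonding                        # unload the module so new options take effect
sudo modprobe bonding mode=1 miimon=100   # mode 1: active backup -- one adapter idles until the other fails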

The latter, complex modes are probably unnecessary for home use. If you have a lot of network traffic you are looking to manage, consult the bonding driver documentation. For most folks, bonding's fault tolerance and failover is a bigger gain than any increased link speed. For example, bonding two WAN links gives you load balancing and fault tolerance between them, but it does not double your upstream throughput, since each connection (such as a Web page HTTP request) has to take one or the other route.
Bridging

The bonding solution is unique in that both network adapters act like a single adapter for the use of the same machine. The other solutions use the two adapters in a manner that provides a new or different service to the rest of your network.

Bridging, for example, links the two network adapters so that Ethernet frames flow freely between them, just as if they were connected on a simple hub. All of the traffic heard on one interface is passed through to the other.

You can set up a bridge so that the computer itself does not participate in the network at all, essentially transforming the computer into an overpriced Ethernet repeater. But more likely you will want to access the Internet as well as bridge traffic between the ports. That isn't complicated, either.

Bridging requires the bridge-utils package, a standard component of every modern Linux distribution that provides the command-line utility brctl.

To create a bridge between your network adapters, begin by taking both adapters offline with the ifdown command. In our example eth0/eth1 setup, run sudo ifdown eth0 and sudo ifdown eth1 from the command line.

Next, create the bridge with sudo brctl addbr bridge0. The addbr command creates a new "virtual" network adapter named bridge0. You then connect your real network adapters to the bridge with addif: sudo brctl addif bridge0 eth0 adds the first adapter, and sudo brctl addif bridge0 eth1 adds the second.

Once configured, you activate the bridge0 virtual adapter just as you would a normal, physical Ethernet card. You can assign it a static IP address with a command like sudo ifconfig bridge0 192.168.1.100 netmask 255.255.255.0, or tell it to retrieve its configuration via DHCP with sudo dhclient bridge0.
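
In summary, the whole bridge comes together in half a dozen commands:

sudo ifdown eth0
sudo ifdown eth1
sudo brctl addbr bridge0        # create the virtual bridge adapter
sudo brctl addif bridge0 eth0   # attach the first physical port
sudo brctl addif bridge0 eth1   # attach the second
sudo dhclient bridge0           # or assign a static address with ifconfig instead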

You can then attach as many computers, hubs, switches, and other devices as you want through the machine's Ethernet ports, and they will all be able to see and communicate with each other. On the downside, if you have a lot of traffic, your computer will spend some extra energy passing all of those Ethernet frames back and forth across the two adapters.
Firewalling and gateway-ing

As long as you have excess traffic zipping through your computer, the OS might as well look at it and do something useful, such as filter it based on destination address, or cache repeatedly requested Web pages. And indeed, you can place your dual-port computer between your upstream cable or DSL connection and the rest of your local network, to serve as a simple Internet-connection-sharing gateway, or as a firewall that exerts control over the packets passing between the network interfaces.

First, you will need to bring both network adapters up and assign each a different IP address -- and, importantly, IP addresses that are on different subnets. For example, sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0 and sudo ifconfig eth1 192.168.2.100 netmask 255.255.255.0. Note that eth0's address is within the 192.168.1.x range, while eth1's is within 192.168.2.x. Maintain this separation when you add other devices to your network and you will keep things running smoothly.

Forwarding the packets between the Internet on one adapter and your LAN on the other is the purview of iptables, a tool for configuring the Linux kernel's IP filtering subsystem. The command sudo iptables -A FORWARD --in-interface eth1 --out-interface eth0 --source 192.168.2.0/255.255.255.0 -m state --state NEW -j ACCEPT allows computers on the LAN interface eth1 to start new connections, and forwards them to the outside world via the eth0 interface. Following that with sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT keeps subsequent packets from those connections flowing smoothly as well.

Next, sudo iptables -A POSTROUTING -t nat -j MASQUERADE activates Network Address Translation (NAT), secretly rewriting the IP addresses of traffic from the LAN so that when it goes out to the Internet, it appears to originate from the Linux box performing the routing. This is a necessary evil for most home Internet connections, both because it allows you to use the private 192.168.x.x IP address block, and because many ISPs frown upon traffic coming from multiple computers.

Finally, run sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" to activate the kernel's packet forwarding.
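
Taken together, the whole gateway recipe from this section looks like this, using the same example addresses as above (eth0 facing the Internet, eth1 facing the LAN):

sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0   # WAN-facing adapter
sudo ifconfig eth1 192.168.2.100 netmask 255.255.255.0   # LAN-facing adapter
sudo iptables -A FORWARD --in-interface eth1 --out-interface eth0 --source 192.168.2.0/255.255.255.0 -m state --state NEW -j ACCEPT
sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A POSTROUTING -t nat -j MASQUERADE        # rewrite LAN addresses on the way out
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"      # enable packet forwarding in the kernel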

This setup will pass traffic from your LAN to your Internet connection, but it does not configure the network settings on the LAN computers themselves. Each of them needs an IP address, gateway and network information, and some working DNS server addresses. If your dual-adapter Linux box is serving as a NAT gateway, you could easily have it provide that information to the clients as well, using DHCP. Your distro probably comes with the dhcpd package. Configuring dhcpd is beyond the scope of the subject here, but check your distro's documentation for Internet connection sharing and you will likely find the instructions you need.
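
As a rough illustration only -- dhcpd configuration details and file locations vary by distro and version, and the lease range and DNS address below are placeholders you would substitute with your own -- a minimal dhcpd.conf entry for the 192.168.2.x LAN in this example might look something like this:

subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.150 192.168.2.200;          # addresses handed out to LAN clients
    option routers 192.168.2.100;               # the gateway box's eth1 address
    option domain-name-servers 192.168.2.100;   # placeholder -- substitute your ISP's DNS servers
}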

Once you are comfortable using iptables to set up basic NAT and packet forwarding, you can dig a little deeper and learn how to use your box as a first-rate firewall by writing rules that filter traffic based on source and destination address, port, and protocol.
Isolating

Finally, you can always configure your secondary network adapter to work in complete isolation from the rest of your LAN.

Sure, there is little gain to such a setup for general-purpose computers, but it is a popular choice for certain Ethernet-connected devices that only need to send data to one destination. Homebrew digital video recorder builders use the technique to connect the HDHomerun HDTV receiver directly to a MythTV back end, thereby isolating the bandwidth-hogging MPEG streams from the LAN. The same traffic separation idea might also come in handy for other single-purpose devices, such as a dedicated network-attached storage (NAS) box, a networked security camera, or your Ethernet-connected houseplant.

For most devices, isolating your second adapter entails setting up the computer to act as a DHCP server, as in the gateway example above, but without the NAT rules that route traffic between the secondary client and the rest of the network.
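
As a hedged sketch -- the 192.168.3.x subnet is an arbitrary choice for illustration, and it assumes dhcpd has a matching subnet declaration -- the isolated setup can be as simple as giving the spare port its own private subnet and answering DHCP only on that interface:

sudo ifconfig eth1 192.168.3.1 netmask 255.255.255.0   # spare port on its own subnet
sudo dhcpd eth1                                        # serve DHCP leases only on the isolated port
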
Caveat emptoring

So which technique is right for you? My advice is to think about what network trouble you most need to prepare for. If your dual-adapter box is a server with heavy traffic to handle, or you need to balance your traffic across two WAN connections, bonding is for you. On the other hand, if you just bought an HDHomeRun to add to your MythTV back end, think about attaching it directly to the spare interface.

Bridging and gatewaying are most similar, in that they use the dual-adapter box to connect multiple other devices into a single network. If that is what you need to do, consider that bridging works at the Ethernet link level, well below IP and TCP in the protocol stack. At the Ethernet level, the only sort of traffic shaping you can do is that based on the hardware MAC address of the computer. You have significantly more control when you run a full-fledged NAT gateway.

But whichever option you choose, remember that messing around with your network configuration can get you disconnected in a hurry if you make a mistake. For that reason, all of the above examples use commands that change the "live" system, but don't alter the configuration files Linux reads in at startup. If you make a mistake, a reboot should bring you back to a known working state.

If you decide you want to make your changes permanent, your best bet is to consult your distro's documentation. Distros vary slightly in where and how they store network configuration scripts (Red Hat uses /etc/sysconfig/network-scripts/, for example, while Ubuntu uses /etc/network/).

Once you start digging into the details, you'll find even more possibilities for utilizing that second network adapter under Linux. But you should now be armed with a general idea of how to make both adapters talk to your network at the same time -- and you can do your part to eliminate network adapter wastefulness.

OfflineIMAP makes messages and attachments available locally

May 06, 2008

By: Ben Martin

OfflineIMAP allows you to read your email while you are not connected to the Internet. This is great when you are traveling and really need an attachment from a message but cannot connect to the Internet.

You can use OfflineIMAP to sync all your email during the night so that it is all instantly available when you wake up. This is a security trade-off -- you gain speed and availability for your email at the expense of having to properly protect the local copy of all the email that is created on your laptop.

OfflineIMAP is designed to contact your IMAP servers and create a local copy of your email in maildir format. You then point your mail client at the local maildir tree and use your mail client as normal. OfflineIMAP can then sync any changes, such as which messages you have read and deleted, back to the server. OfflineIMAP performs a bidirectional sync, so new messages from the server are downloaded to your local maildir while any changes you have made locally are sent to the IMAP server.

If your email client does not support maildir format, you can use OfflineIMAP to sync email between two IMAP servers and ultimately accomplish the same thing. This scenario is a little more complex, as you need to install an IMAP server on your laptop, tell your email client to connect to the IMAP server on localhost, then use OfflineIMAP to keep the IMAP server on your laptop in sync with your main IMAP server. An alternative is to use OfflineIMAP to sync to a maildir repository as normal and tell your local IMAP server to use that maildir as its email source. This thread contains information on setting up courier-imap locally to serve up your mail.

OfflineIMAP packages are available for openSUSE, Ubuntu Gutsy, and from the Fedora 7 and 8 repositories. If no packages exist for your distribution, the documentation provides good information on installation from source. I used OfflineIMAP 5.99.2 from the Fedora 8 repository. Version 5.99.2 does not support the Gmail account type. Version offlineimap-5.99.7 from the Fedora rawhide repository does support Gmail but has another bug relating to directory creation which causes synchronization to fail. For these reasons I would recommend using the IMAP account type and manually configuring it for Gmail until package repositories contain later versions of OfflineIMAP.

The primary configuration file for OfflineIMAP is $HOME/.offlineimaprc, and you can find a commented template configuration file online. The configuration file defines one or more accounts. For each account you must set the local and remote repository. A repository is configured in its own section and has a type: Maildir for storing email locally, or IMAP for connecting to a mail server. When connecting to an IMAP server you can specify the hostname, username, and password, and whether OfflineIMAP should use SSL to connect to the IMAP server.

Configuration and setup is shown below. First I create the configuration file using the sample that comes with the offlineimap package. The accounts directive is set to contain a single Gmail account. This account has both a local and remote repository so that OfflineIMAP knows where to store email locally and what server to contact. The local repository is a maildir in my home directory. The remote repository uses the type IMAP instead of Gmail because of the version issues discussed above. I have selected an appropriate email address as the remoteuser so spambots will make themselves known. The nametrans directive lets you change the name of folders in the local repository. In this case I call re.sub twice to first change occurrences of INBOX, [Gmail], or [Google Mail] into root. One directory will be missed by this initial mapping, which is then accounted for by moving the Sent folder inside the root folder. This translation is useful because Evolution expects your inbox to be directly in the root folder of your IMAP account. If you change where the local copy of INBOX is stored, Evolution can more naturally interact with the local mail repository. You can also set up more elaborate folder name translations depending on your needs.

$ cp /.../offlineimap.conf ~/.offlineimaprc
$ vi ~/.offlineimaprc
accounts = my-gmail

[Account my-gmail]
localrepository = GMailLocalMaildirRepository
remoterepository = GMailServerRepository

[Repository GMailLocalMaildirRepository]
type = Maildir
localfolders = ~/.offlineimap-maildir-my-gmail
sep = .
restoreatime = no

[Repository GMailServerRepository]
type = IMAP
remoteuser = i-am-a-spam-bot-log-me@gmail.com
remotehost = imap.gmail.com
ssl = yes
remotepassfile = ~/.offlineimap-pass-my-gmail
realdelete = no
nametrans = lambda foldername: re.sub('^Sent$', 'root/Sent', re.sub('^(\[G.*ail\]|INBOX)', 'root', foldername))
...

$ mkdir -p ~/.offlineimap-maildir-my-gmail

With this configuration in place, just run offlineimap. It will check its metadata and notice that you haven't performed any previous sync and download everything from your IMAP server.

You should then have a complete copy of your email in maildir format on your local machine. See the client notes for information on configuring your email client to directly use the email from this maildir. When you want to send your changes back to the main IMAP server and check for new email, just run offlineimap again. Alternatively, you can use the autorefresh directive in ~/.offlineimaprc to tell offlineimap to continue to sync your accounts every n minutes.
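
For instance, a sketch of what that might look like in ~/.offlineimaprc -- assuming a 30-minute interval, and assuming the directive belongs in the account section alongside the repository settings shown earlier:

[Account my-gmail]
...
autorefresh = 30   # keep running and re-sync every 30 minutes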

Normally, you should run OfflineIMAP without any command-line options to bidirectionally synchronize your configured email accounts, but OfflineIMAP accepts some options that might be handy for casual use. The -a option accepts a comma-separated list of accounts that you wish to synchronize. This can be great if you are expecting a message but have some accounts defined that are slower to sync than others. The -u option lets you choose one of many interfaces to OfflineIMAP. The default is the Curses.Blinkenlights interface, which you might find to be too distracting. TTY.TTYUI displays a simpler and less distracting progress report. You can also change the interface that will be used by default by altering the ui directive in ~/.offlineimaprc. The -c option allows you to specify an alternate location to ~/.offlineimaprc for the configuration file.
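
For example, using the account name from the configuration above (the alternate config path here is just a hypothetical illustration):

$ offlineimap -a my-gmail -u TTY.TTYUI    # sync only the my-gmail account with the quieter interface
$ offlineimap -c ~/.offlineimaprc-work    # read settings from an alternate configuration file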

Having the contents of your IMAP account available offline means you don't have to seek out an Internet connection just to get an attachment or wonder if a particular message has been cached locally by your email client. If you are working with moderate-sized attachments, the ability to schedule your laptop to grab your email an hour before you wake up can save precious time when you are traveling.

As the SCO rolls

By: Steven J. Vaughan-Nichols

Reality, as good writers know, is sometimes stranger than fiction. SCO's recent performance in the U.S. District Court in Utah is a perfect example. With years to prepare, SCO executives made some remarkable statements in their attempt to show that SCO, not Novell, owns Unix's copyright.

While this case is not about SCO's claims that IBM and other companies placed Unix IP (intellectual property) into Linux, Novell's attorneys decided that they would address this issue as well. One presumes that, since this may be their one and only chance to attack SCO's Linux claims in a courtroom -- what with SCO facing bankruptcy -- they decided to address this FUD once and for all.

Before getting to that, though, Novell hammered on Christopher Sontag, one-time head of SCOSource, the division of SCO devoted to selling Unix's IP. Sontag, while dodging around what code SCO was actually selling -- UnixWare code or the whole Unix tree leading to UnixWare -- was finally cornered into admitting that SCO had received $16,680,000 from Microsoft and $9,143,450.63 from Sun, and did not report these deals or income to Novell as it was required to do under the terms of the Novell/SCO APA (Asset Purchase Agreement).

On the second day of the hearing, April 30th, Sontag admitted that he did not "know if there's any code that is unique to UnixWare that is in Linux." He also admitted that he did not know of any analysis that showed there was any "legacy SVRX [Unix] software" in UnixWare. For someone who was in charge of SCO's Unix IP, who arranged to license it to Sun and Microsoft, and whose company was suing IBM for using Unix code in Linux, Sontag seemed remarkably ill-informed about exactly what it was that he was selling.

Sontag was followed on the witness stand by SCO CEO Darl McBride. With McBride on the stand, as can be seen in the trial's transcript, things became somewhat surreal. McBride, only minutes after Sontag said he didn't know if there was Unix or UnixWare code in Linux, said, "We have evidence System V is in Linux." McBride's most memorable moment came, though, when he claimed, after years of never being able to demonstrate any direct copying of Unix material into Linux, that "Linux is a copy of UNIX, there is no difference [between them]."

Regarding SCO's May 2003 letter to companies using Linux, which warned, "Therefore, legal liability that may arise from the Linux development process may also rest with the end user," McBride claimed, "I don't see anything in here that says you have to take a license from us."

From there, McBride went on to say that simply because SCO had stated in this letter that "We intend to aggressively protect and enforce our rights," and had added that the company had already sued IBM, SCO didn't mean to imply that "we're going to go out and sue everybody else." At the time, most observers agreed that SCO certainly sounded as if it were threatening to sue Linux end users.

McBride then managed to entangle himself in how SCO accounted for the revenue it had received from Microsoft and Sun. The implication, which McBride vigorously denied, was that SCO had misled the stock-buying public in SEC documents in 2003 and 2004.

In what may prove to be a problem for Sun in the future, McBride also said that while SCO felt Sun had the right to open-source Unix in OpenSolaris, its most recent Sun contract was really about Sun "looking for ways to take their Solaris operating system and make it more compliant with the Intel chip set, which is what SCO has a deep history of doing."

Greg Jones, Novell's VP of Technology Law, was then sworn in. Jones testified that SCO's 2003 agreement with Sun "allows Sun, then, to release Solaris as open source under an open source licensing model, which they have done in a project called OpenSolaris. So it poses a direct competitive challenge to Linux and, certainly, to Novell, given that Linux is an important part of Novell's business. We are a Linux distributor."

Jones went on to say that if Novell had been aware of SCO making this deal with Sun, it would not have allowed it because, "It simply would not have been in Novell's commercial interests. In the fall of 2002, Novell had acquired Ximian, a Linux desktop company. We were exploring ways to get into the Linux market so enabling a competitor to Linux simply would not have been in Novell's interests. In the manner in which they entered this agreement, when they did it, they kept all the money. I assume that would have been their proposal but, fundamentally, it simply would have been contrary to Novell's business interests to enable something like this."

On the third day of the case SCO stuck to its guns, but added little more to its arguments.

On the case's final day, Novell simply stated that, when all was said and done, the APA made it clear that Novell, and not SCO, had the rights to Unix's IP. Therefore, SCO had no right to make these deals, and certainly no right whatsoever to keep the funds from such deals.

In Novell's closing arguments, Novell also hit again on the SCO/Sun deal. Novell pointed out that "There's no question they (SCO) allowed Sun to open-source Solaris," and that while SCO executives would have you believe that giving Sun the right to open-source Solaris had no market value, SCO's engineers believed that open-sourcing Solaris had great value.

So, as the case moves on, SCO still seems unable to make any headway on its claims that the APA gave it the right to sell Unix's IP. Novell's attorneys also made a point of demonstrating that SCO still has only naked claims, without any evidence, that there's any Unix code inside Linux. The judge is expected to rule on the case in the near future.

Finally, Sun may yet have to contend with Novell's IP interests in OpenSolaris. Novell clearly doesn't believe Sun had the rights to open-source the System V code within OpenSolaris under its CDDL (Common Development and Distribution License).

Monday, April 28, 2008

Apple in rumored talks to license vibration feedback for iPhone

By Aidan Malley
Published: 02:45 PM EST

Apple has reportedly begun talks with Immersion to integrate haptic feedback into future touchscreen devices, addressing a complaint leveled against the iPhone by fans of physical buttons and keyboards.

An Apple worker has allegedly leaked to Palluxo that Immersion executives met twice with their Apple peers this week to discuss integrating Immersion's vibration response technology into the cellphone.

The meetings are said to parallel a more publicized deepening of relations between the two companies through indirect means: Immersion this week hired Clent Richardson, a former Worldwide Solutions Marketing VP at Apple between 1997 and 2001.

What exactly would be implemented is unclear. However, Immersion's most recent efforts have focused on using haptics to simulate physical button presses in an increasing number of touchscreen phones, including Sprint's soon-to-be iPhone rival, the Samsung Instinct.

The technique most frequently involves sending short, concentrated pulses through all or specific locations of a phone as the user taps buttons in software. In effect, haptics not only restore some of the feel absent in touchscreens, but also give users a way of confirming that the phone has recognized a command through more than just visual output.

A frequently-cited complaint regarding the iPhone is its lack of tactile feedback for converts from BlackBerries and other smartphones, many of whom expect the relative certainty of physical responses while navigating the interface or typing.

Neither Apple nor Immersion has commented on the rumored discussions, which are still early and so aren't expected to result in a finished product for some time.

Remote control function said hidden in iPhone beta firmware

By Aidan Malley
Published: 02:20 PM EST

Code in Apple's latest iPhone 2.0 beta firmware allegedly contains references to a utility that will let an iPhone or iPod touch play media from nearby iTunes sources.

Pointing only to an unnamed person as the source for its leak, TUAW claims that multiple string entries in the cellphone's beta code refer to selecting from different media categories and include mentions of dialog boxes that let users choose their particular source.

The information suggests that the additions are the groundwork for an Apple TV-like feature, purportedly named iControl, that would let users play any iTunes content over a local network, with features similar to the dedicated media hub already on store shelves.

It's unclear as to whether the feature is strictly intended for streaming local content to the iPhone itself or can actively steer other devices, imitating a more advanced home theater remote such as Logitech's Harmony.

However, the listings as they appear would indicate a direct connection to a 2.5-year-old patent filing submitted by Apple in late 2005.

In the filing, the iPhone maker says it has invented a method that would let a portable media player view the contents of a local media server, such as a computer, and send instructions telling the media server to change tracks while it outputs content to a separate media receiver, whether physically attached to the computer (such as speakers) or remote (such as an Airport Express-like connection).

The aim is to let users steer media playback in a networked media system with existing hardware rather than dedicated controllers, Apple engineers state in the earlier patent.

Regardless of iControl's actual purpose, there are no clues as to when, if ever, the software will be released for the company's handheld devices.

Europe's not finished with Microsoft

Microsoft's troubles in Europe are far from over, as Neelie Kroes, The EU competition commissioner, has warned. We review the past and future options for Microsoft and the European Commission.

Posted Richard Hillesley at 2:34PM, 24th April 2008

The 80s were the dog-eat-dog days of business. Top of the pile was Microsoft, the biggest and baddest of them all, led by Bill Gates, who invented the computer, the universe and everything.

Gates looked a bit like the nerd on the cover of Mad Magazine, made it to the cover of Time magazine, and was rich and successful beyond anybody's wildest dreams.

The view of Gates and Microsoft from inside the computer industry was more circumspect. PC software looked amateurish and nobody took it too seriously until the cultures began to collide in the business world during the middle of the decade. The affordable desktop computer, which sprang out of an unholy alliance between IBM, Intel and Microsoft, changed the face of computing in the home and in the work place, and for the most part was beneficial to the user, if only because it was cheap and accessible.

Microsoft always took more credit for this revolution than it probably deserved, but had a way of coming out on top, which owed everything to its early dominance of the operating system market for the IBM PC and its clones. From this dominance grew its prominence on the desktop, and the gradual eclipse of its competitors. The question that was always being asked of Microsoft was how much the company owed its success to the quality of its software, and how much to the ruthlessness of its marketing.

From the beginning Microsoft had a special relationship with the original equipment manufacturers (OEMs), and made this relationship tell. Each innovation on the desktop, each new tool and the company that made it, either fell by the wayside or was assimilated into the Microsoft hive.

In the hive

Compaq had its arm twisted to stop it bundling Apple's Quicktime on the desktop. Internet Explorer, and later, the Windows Media Player, were bundled into the operating system, and given away free, sucking revenues and market share from Netscape, Real Networks and Apple. The squashing of Netscape and the subsequent death of the browser market led to Microsoft's conviction for monopolistic behaviour before the US antitrust courts.

Microsoft added platform-dependent "features" to Java to render Java's multi-platform features redundant, and when that ended up in court, developed the .NET platform, a very successful and popular alternative that reproduced many of the major features of Java with the notable exception of its multi-platform capabilities.

Kerberos, the encryption standard developed by MIT, was extended by Microsoft with the apparent objective of inhibiting interoperability in the workgroup server space and, in the words of Jeremy Allison of Samba: "these changes were treated as trade secrets, patented if possible, and only released under restrictive non-disclosure agreements, if released at all."

During the US anti-trust trials, Steven McGeady, a vice president of Intel, testified against Microsoft, Intel's most important trading partner, asserting that Microsoft intended to "embrace, extend and extinguish" competition by substituting open standards with proprietary protocols, and claimed that Intel had been warned to cease development of its Native Signal Processing audio and video technology, which promised to vastly improve user experience of the desktop - or else Microsoft would bypass Intel and develop Windows exclusively for AMD and National Semiconductor chips. "It was clear to us that if this chip did not run Windows it would be useless in the marketplace," McGeady testified. "The threat was both credible and terrifying."

Microsoft has always had an ambivalent relationship with the concept of interoperability and with the standards that make interoperability possible, tending to view the protocols and data formats it uses as "de facto" standards and "trade secrets" which it is free to "extend" with no obligation to share. This may not always be deliberate behaviour. Where there is a monopoly, standards become incidental, an option rather than an obligation. This tendency has been at the root of Microsoft's problems in the US and European courts. Microsoft is not being penalised for success, but for shutting the door on competition, and resisting any requests to modify its behaviour.

Into Europe

Microsoft's troubles in Europe began as early as 1993, when Novell complained that "onerous licensing conditions" imposed on OEMs by Microsoft were pushing NetWare out of the workgroup market.

In this market Novell had been the innovator, but Microsoft had muscled a napping, but still relevant, Novell out of the picture. Thus began a long history of litigation that culminated in the 17 September 2007 decision of the European Court of First Instance, which upheld the European Commission's decision to fine Microsoft and upheld the principle of interoperability.

The September judgement came at the end of a ten-year case initiated by Real Networks, supported by Sun Microsystems, Novell and others, all arguing that innovative products were being pushed out of the market on the back of Microsoft's monopoly. Over the years each of these litigants withdrew from the case after doing deals with Microsoft worth billions of dollars, leaving the Free Software Foundation Europe (FSFE), the Samba Team, and their allies to fight the case to the finish.

As Jeremy Allison of the Samba Team told Groklaw: "the copyright in Samba is spread across many, many individuals, all of whom contributed under the GNU GPL 'v2 or later', now 'v3 or later' licenses. You can't buy that. There's nothing to sell. There's no point of agreement for which to say 'here are the rights to Samba, we'll go away'. We're in the, some would say unique, some would say unenviable position, of not being able to sell out. We can't be bought."

Much has been made of the Commission's insistence that Microsoft offer a version of Windows without Windows Media Player bundled, and the record fines imposed upon Microsoft. Improbably, some press coverage suggested that the European decision was a blow against innovation and competition. But the fines mean little more than a few pence on the price of Windows to a company as rich as Microsoft. The fines are a penance for Microsoft's prevarications and refusal to comply with the European courts.

The most important part of the judgement was the Commission's insistence that Microsoft be forced to publish the protocols used by Windows clients and servers under "reasonable" and "non-discriminatory" terms.

For this decision to have any meaning it was incumbent upon Microsoft to publish the protocols in their entirety, and to reflect the actual behaviour of Microsoft servers and clients in the real world - without evasion, inconsistencies, broken standards, obfuscations, fees or hidden patents - to comply with the commonly understood meaning of open standards and protocols as they have been implemented by other participants in the computing industry.

Microsoft has complied, with reservations, releasing protocols and data formats free for "non-commercial" use (which immediately discriminates against competition), and making promises of future interoperability with its products. Unfortunately the promises have come with limitations, and the limitations target free and open source software.

As Thomas Vinje of ECIS noted: "For years now, Microsoft has either failed to implement or has actively corrupted a range of truly open standards adopted and implemented by the rest of the industry. Unless and until that behaviour stops, today's words mean nothing."

Bursting the bubble

It is worth noting that once Netscape was trounced and Microsoft assumed the monopoly position in the browser market, there was a five-year gap of no innovation or competition between the release of IE6 and IE7. The subsequent release of IE7 was almost certainly prompted by the rapid rise of the open source browser Firefox, and was notable for its failure to comply with W3C standards. Dominance of a market by a proprietary monopoly does not encourage innovation.

Throughout the European Commission's proceedings Microsoft claimed that the protocols were proprietary to Microsoft, and talked of protocols that were enclosed in a "blue bubble". Georg Greve, president of the FSFE explained: "The blue bubble was a theory that Microsoft invented in order to justify that it had kept parts of the protocol secret. They said that there's a difference between the internal protocols and the external protocols, if you want to describe them like that. They said that certain protocols that are so secret that they are in this blue bubble, because they had visualized this with a blue bubble, that this could never be shared without actually sharing source code, without sharing how the program exactly works. These protocols were so special that somehow, magically, you had to have the same source code to actually make that work. That was the blue bubble theory. So they said things like, 'HTML is outside the blue bubble, but the things you want us to disclose, that is inside the blue bubble.'"

In the wake of the decision, the US Assistant Attorney General for Antitrust, Thomas Barnett, made the highly contentious claim that the outcome, "rather than helping consumers, may have the unfortunate consequence of harming consumers by chilling innovation and discouraging competition," which drew a clear response from the EU competition commissioner, Neelie Kroes, that it was "totally unacceptable that a representative of the US administration criticised an independent court of law outside its jurisdiction."

In contrast, the American Antitrust Institute noted "the oddity of Barnett's statement" as both Europe and the US had found that Microsoft was "a monopolist which had acted to harm competition, and both insisted on interoperability in framing a remedy," and noted that "the EC has appropriately targeted strategies that would have the effect of deterring investment in innovations that might lead to a reduction of the monopolist's power and new benefits for consumers."

Talk is cheap

As the kerfuffle surrounding MS-OOXML demonstrates, the publication of protocols and data formats is not enough. To become truly universal, proprietary interest must be relinquished, and interoperability frameworks opened up to discussion, contribution and maintenance by third parties through a neutral party (usually a standards body), and this is something that the European commissioners are beginning to understand.

As the MS-OOXML kerfuffle has also demonstrated, such processes are highly political and, like the political process, can be influenced and misled.

But for the moment, Microsoft's tribulations in Europe are far from over. The Commission is investigating a complaint from Opera Software demanding that Internet Explorer comply with W3C standards, and one from the industry body ECIS (European Committee for Interoperable Systems), in which Microsoft is alleged to have "illegally refused to disclose interoperability information across a broad range of products, including information related to its Office suite, a number of its server products, and also in relation to the so called .NET Framework. The Commission's examination will therefore focus on all these areas, including the question whether Microsoft's new file format Office Open XML, as implemented in Office, is sufficiently interoperable with competitors' products."

In a press conference to announce Microsoft's latest fine, the EU competition commissioner, Neelie Kroes, emphasised that "a press release does not necessarily equal a change in a business practice. And if change is needed... then the change will need to be in the market, not in the rhetoric."

She also said: "There are lessons that I hope Microsoft, and any other company contemplating similar illegal action, will learn. Talk, as you know, is cheap; flouting the rules is expensive. We don't want talk and promises, we want compliance. If you flout the rules you will be caught, and it will cost you dear."

Proprietary protocols are anathema to network computing and a deliberate hindrance to innovation and competition in computing environments. Few of the players, or users, maintain the illusion that a Microsoft-only world is either desirable or attractive - and the accusations of ballot stuffing, bribery, and undue political influence that surrounded the acceptance of OOXML as a standard by the ISO have only served to emphasise this reality.

Thursday, April 24, 2008

Apple's ultra-thin MacBook Air also slim on profits?

By Slash Lane
Published: 10:00 AM EST

In its determination to deliver the world's thinnest notebook, Apple admitted to sacrificing some speed and versatility, but a new analysis suggests that it may have given up some early profits as well.

Though the Cupertino-based Mac maker largely beat estimates for its second fiscal quarter on Wednesday, one sore spot appeared to be gross margin, which came in at about 100 to 200 basis points below most analysts' expectations at 32.9 percent.

An ensuing conference call was thus dominated by the matter, as Wall Street folk routinely pelted management with questions on the perceived shortcoming as they sought a better understanding for their models going forward.

While management largely attributed the near 2 percent margin decline from the prior quarter to February's iPod shuffle price cut and a routine falloff in sales of Mac OS X Leopard and iWork, Piper Jaffray analyst Gene Munster offered his own explanation.

"We believe the margin outlook may be viewed negatively by investors, who likely wanted to see more of Apple's significant revenue upside trickle down to earnings," he wrote in a note to clients early Thursday morning. "The bottom line, we believe the margin was negativity impacted by a higher mix of Mac Book Air, which we now believe carries a lower margin."

On the bright side, Apple has likely built the potential for margin expansion into its MacBook Air design as adoption swells and component prices fall. What's more, Apple management appeared upbeat in stating that the Air has thus far shown little to no cannibalization effect on the company's other notebook offerings and thus could be considered largely responsible for helping push Mac unit growth to its highest rate in nearly two decades.

"The key takeaway from Apple's March quarter is that the Mac units grew at the highest year-over-year rates (units 51 percent and revenue 54 percent) in 17 years," Munster added in his note to clients. "Macs are the most meaningful category with the most potential and they are performing the best."

Looking ahead, the Piper Jaffray analyst said he's modeling conservatively for Mac growth rates to decline to 12 percent year-over-year for the remainder of calendar year 2008, which leaves "ample room for positive estimate revisions over the next 8 months."

"Mac growth is accelerating despite multiple quarters of strong growth, iPod sales are stabilizing with higher average-selling-prices due to the touch, and the iPhone will be significant in the second half of the year with the release of new hardware and software," he wrote.

Bacula: backups that don't suck

By Robert D. Currier on April 23, 2008 (9:00:00 AM)

Good systems administrators know that implementing a robust backup procedure is one of their most important duties. Unfortunately, it's also one of the most complex and least fun. When the phone rings and there's a panic-stricken user on the other end who has just lost a crucial document, you need to be confident that you can promptly recover his missing files. Failure to do so can bring about a speedy end to a promising career in systems administration. So what's a budding sysadmin to do? Download the latest release of Bacula and watch those backup woes disappear into the dark of night.

Led by head developer Kern Sibbald, the Bacula team has built an open source backup solution that is fast, reliable, and exceptionally configurable. Bacula is not a monolithic application, but rather a collection of programs that together provide a robust backup, recovery, and verification toolset suitable for five or 500 systems.
Getting started

For this review we tested Bacula on a single CentOS 4 server using the file system as our backup medium. In our production environment we use Bacula to manage more than 500GB of backups from multiple clients using a tape robot. However, the lengthy process of configuring the tape robot and multiple clients makes this a daunting task for the first-time user. We recommend you stick with a single host for your test drive if you've not worked with Bacula before.

Bacula is available as a package using the standard package management tools yum or apt-get, but we prefer to install the application from source. When you're just getting started with Bacula, building from source gives you a better feel for how the application operates.

After downloading and uncompressing the project's source code you'll need to run the configure script. Bacula's configure script is well written and produces meaningful debug output but requires a large number of command-line settings. To simplify the configuration process the development team has included a suggested list of options that handle most environments. Start with these settings and modify them where appropriate. Case in point: the choice of database. Bacula is a database-driven application and requires MySQL, SQLite, or PostgreSQL. Make sure you have one of these databases installed prior to configuring Bacula. Once you've run the configure script, you can build Bacula using the normal make; make install process.
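As a rough sketch, the source build boils down to the following (the tarball name matches the version used later in this review, and the --with-mysql option is an assumption for a MySQL-backed install; substitute your own database choice and the options suggested by the Bacula team):

tar xzf bacula-2.2.8.tar.gz
cd bacula-2.2.8
./configure --with-mysql
make
sudo make install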

Next, you must create the Bacula database and tables and set appropriate access permissions. Located in Bacula's bin directory, the script create_bacula_database will create Bacula's database after determining what database you are using. After a successful termination of this script, executing make_bacula_tables will create and populate the database tables. Finally, grant_bacula_privileges will establish the necessary access controls. A word of warning: grant_bacula_privileges creates an unrestricted access policy for the user bacula. You should modify this policy to suit your needs. At a minimum you should consider setting passwords for the MySQL users root and bacula.
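For example, with a MySQL backend the database bootstrap looks something like this, run from Bacula's bin directory (the path is illustrative; yours will depend on where you installed Bacula):

cd /home/username/bacula/bin
./create_bacula_database
./make_bacula_tables
./grant_bacula_privileges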

After successfully building and installing Bacula, the next step is setting up the config files. That can take some time, but once you've gotten them working you won't have to touch them again except when you add a new client or fileset.

To understand the configuration files, you need to understand how Bacula operates. Bacula comprises four main modules: the director, the storage daemon, the file daemon, and the console. The director is the "boss" of Bacula, providing job scheduling, backup media descriptions, and access control. In a typical Bacula deployment there is only one director.

The storage daemon handles all communications with the defined backup architecture: disks, single tape drives, tape robots, and optical drives. As with the director, there is usually only one storage daemon running per Bacula installation, but the storage daemon may have many backup devices defined.

The file daemon is installed on each client machine and provides the communications link between the storage daemon and the client. It needs access to all files that will be backed up on that client.

The console handles communications between the administrator and Bacula. The administrator can start or stop jobs, estimate backup sizes, and review messages from Bacula. Consoles are available that use wxWidgets, GNOME, and Web browsers, but we prefer the TTY version of bconsole.

The bacula-dir.conf file, which controls the director, contains detailed information on the clients to be backed up, job definitions, filesets, and job schedules. One of the outstanding features of Bacula is how close the configuration files, as supplied, come to being ready to rumble. While a large installation with many clients will require significant editing of the configuration files, our test of Bacula required us only to give Bacula a list of files to back up.

Bacula typically installs to a subdirectory named bacula in the home directory of the user that is deploying the software. The configuration files are located in the bin subdirectory of /home/user/bacula. Switch to this directory and edit bacula-dir.conf. Search for the section of text beginning with "# By default this is defined to point to the Bacula build directory to give a reasonable FileSet to backup to disk storage during initial testing." Directly below these lines should be: File = /home/username/bacula/bacula-2.2.8. This is the FileSet definition, which controls what files and directories are to be backed up. You may change this definition to a directory of your choice or leave it as is during testing.
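For reference, a minimal FileSet resource in bacula-dir.conf looks roughly like this (the resource name and the MD5 signature option are illustrative assumptions; only the File line needs to change for this test):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /home/username/bacula/bacula-2.2.8
  }
}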
Your first backup

By default Bacula uses the filesystem as its backup media. To keep things simple we won't attempt to configure Bacula to use a tape drive -- we'll stick with the preconfigured definitions.

Change directories to Bacula's bin folder. Execute the bacula script with start as an argument: ./bacula start. You should see the following three lines:
# Starting the Bacula Storage daemon.
# Starting the Bacula File daemon.
# Starting the Bacula Director daemon.

If Bacula successfully started, that's great -- you're ready to run your first backup. If not, carefully read the error messages and double-check the bacula-dir.conf file. Make sure you've pointed Bacula to the directory you wish to be backed up and that the directory exists and is readable.

The final step in taking Bacula for a test drive is using the console to initiate the job. From the Bacula bin directory execute the bconsole script. Bconsole should return an asterisk prompt. At the prompt enter run. Bconsole will display a list of defined jobs for you to pick from. Since we're only backing up one machine (our test box) you should only have one job resource to choose from. Select the job and press Enter. Bacula will prompt you with a short list of settings, including client name, backup type (full, differential, or incremental) and the storage device. If all the settings look correct, enter yes to kick off the backup.
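Condensed, the console interaction described above amounts to something like this (menu output trimmed; the prompts you see may differ slightly):

./bconsole
* run
(select the job, review the client, backup type, and storage settings, then answer yes)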

After a short wait, Bacula should return a "backup completed successfully" message with details including the file(s) backed up, the amount of space the backup consumed, compression ratios, and so on. Congratulations -- you've just run your first Bacula backup.
Only the beginning

Of course at this point we've barely scratched the surface of Bacula's many features -- the user manual is 665 pages long. New users should read the excellent tutorial before embarking on a multi-client Bacula installation.

Despite its apparent complexity, Bacula is straightforward to install and configure, comes with excellent documentation, and works right out of the box. If you have been considering moving from basic tar or rdump backup processes to a more substantial package, you can't go wrong by choosing Bacula. It definitely doesn't suck.

Wednesday, April 23, 2008

Notebook Company Tech Support Comparison

by Kevin O'Brien

Computer problems can be one of the most frustrating situations any person can go through, especially if it is your primary computer for school or work. Downtime can cause missed assignments, projects, or worse if you manage all your finances through a computer and can't pay a bill on time. We decided to call up Dell, Toshiba, Lenovo, HP, Gateway, and Apple to see who was the easiest to deal with and how long the average call took.

For our calls, we scored each company in multiple areas, including menu navigation, how long until you left the menu system, time to a service rep, and total call length. The basic question asked of each company was, "Is there a way to manually eject a CD stuck inside my optical drive?" The expected answer was to use a paperclip in the manual eject hole, but as you might expect, not all companies came to that answer right away. In one case we were offered a brand new drive at first, and in another we ended up being cut off after 35 minutes on hold.

Dell Support Call

For our Dell support call, we used the Home and Home Office contact number listed on the site. This was 1-800-624-9896, which listed 24/7 availability. This number was fairly easy to find using Google, and a bit of navigating on the Dell site.

The first interaction the user gets is a voice activated menu system which was not very difficult to navigate. It took me about 30 seconds to get through the system, and 10 seconds later I was speaking to a service rep. The service representative was very friendly, and as soon as I mentioned my problem he prompted me to find a paper clip and insert it into the small hole on the side of the CD tray. After thanking him for his help, the total time on the phone was 3 minutes and 21 seconds.

Toshiba Support Call

For the Toshiba support call, we used the computer support line, listed as 1-800-457-7777. The main "Contact Toshiba" webpage does not indicate whether this is a 24/7 support line, but it worked for our Saturday afternoon call.

The support line was very easy to navigate, using all phone prompts to navigate the menus. Total time to navigate the menus and be routed to a support representative was 60 seconds. The representative was very helpful, and immediately knew about the pinhole on the side of the CD tray. Total time of the call was 3 minutes and 40 seconds.

HP Support Call

For our HP support call, I used the standard support line, which offered 24/7 availability. The number listed was 1-800-474-6836. Finding this number was very simple, using both Google and a little site navigation.

The HP support line was all voice navigated and the most frustrating to navigate. You are prompted for the type of product, as well as the product model name. In the case of our dv6500t test notebook, it was quite challenging to get the system to understand what I was saying. The voice prompt misunderstood me three times when I said "DV...." and routed me directly to the TV technical support center. On the 4th try, speaking very carefully, I finally got it to recognize what I was saying. After 4 minutes, I had finally made it to a human.

The technical service representative was frustrating to talk with, and would not assist me without a valid serial number. With some poking and prodding, he put me on hold to talk with his manager to find out if he was allowed to tell me how to manually eject a CD without a verified serial number. After 4 minutes of being placed on hold, he came back and thought that a paper clip used on the tray release hole would do the trick. The total time on the phone was 12 minutes and 30 seconds.

Lenovo Support Call

For our Lenovo support call, we used the United States 24/7 support line. The listed number was 1-800-426-7378, with about 200 additional numbers depending on what country you were located in. Finding this list was quite simple with the help of Google and minor site navigation.

One quirk that cropped up with the Lenovo support line was when I first attempted to use my Skype VOIP line from my home. When the support number was dialed, the phone just rang and rang with no pickup from the other end. When I switched to my cell phone to make the call, the line picked up on the first ring.

The Lenovo phone system used a combination of voice and phone prompts to navigate the system. It took about 60 seconds to get routed to a human on the system. The representative required all of my computer and personal contact information before he would start, which added a bit of time to the service call.

After 4 minutes, I started to explain the problem, and received a very odd answer. He explained the drive did not have a manual release to eject a stuck CD, but they would be more than willing to send out a new drive for my notebook. While a new drive would be nice, what about my precious CD that was stuck? After additional hinting towards a possible fix, he finally suggested that a bent paperclip might work using the manual tray release hole. The total time on the phone was 7 minutes and 20 seconds.

Apple Support Call

The Apple technical support line was very easy to find, and it was listed as 1-800-275-2273. The Support page did not show that the line was 24/7, but it worked just fine on the weekend when I called.

Let me preface this support call by saying that I did not expect a quick answer for my standard stuck CD question. The MacBook Pro does not have a quick release on the CD tray, and it requires service to fix. I was hoping for a quick answer stating that fact, and hopefully some recommendation on the closest Apple Store.

The phone system used both voice and phone prompts to navigate to the correct area, and quickly routed me to a human in about 60 seconds. The service rep was very friendly, and was quick to help me once I gave him my information. When I explained the problem of the stuck CD, he asked questions about the noises the system was making, and if dragging the disc to the trash bin would eject the CD.

When I explained that the CD never fully clicked into position, he put me on hold to further research the problem. I was expecting a quick return and an explanation that it would require service, but I never heard from the guy again. I was on hold for 35 minutes, and at the 39-minute total time mark I was disconnected from the call and routed back to the original Apple technical service menu. If I were a real customer, I would have gone crazy at that point.

Since the service was so bad the first time, I gave it another shot the next day during normal business hours. Getting through the phone prompts was just as easy as before, and this time around the lady I spoke with seemed more eager to help me out. We went through some of the same troubleshooting steps such as pressing the eject button, restarting the computer, and so on. In the end she decided to send me to an Apple store for further help, and went as far as scheduling an appointment at my local Genius Bar. Total time was 6 minutes.

Gateway Support Call

Gateway technical support is set up differently from other manufacturers; they have different contact numbers depending on whether the system was purchased directly from Gateway or from a retail store. The number I called was the retail support line, which was 1-408-273-0808.

The Gateway support line leads you through multiple voice prompts that require you to explain your intentions as well as share your model number. During this call the system did require multiple corrections, but unlike the HP line, never transferred me to a different support area. In all it took about 2 minutes to get through the voice prompts and finally speak with an agent. The support agent was very friendly and, even though I did not have my serial number handy, helped me along with my support request. Without referencing any support material he knew about the manual release off the top of his head, and quickly solved my problem. With a friendly reminder about locating my serial number in the future, I was done with the call in 4 minutes and 33 seconds.

Conclusion

Besides realizing that I have way too much free time on the weekends, I found that simple things can make the technical support experience either wonderful or frustrating. The interactive voice prompts were hands down the worst part of most calls. With HP, I was rerouted to the wrong area multiple times because the computer kept thinking my "DV6500t" model number meant I was saying "TV", which then cut me off and routed me to the wrong system. In others you had to pronounce the category you wanted multiple times before it would understand you. Having a simple "press 1 for computers ..." made the interaction much easier.

Overall, Dell and Toshiba were the best for ease of access and quick resolution. They had the easiest phone menu systems to navigate, fastest times to talk with an agent, and the shortest overall call length. Gateway ranked 3rd, with the phone prompts being the only negative aspect of the phone call. Lenovo was also very good, but a new optical drive isn't always the best answer when you are trying to get work done now. Apple ranked in the middle with Lenovo when you averaged the poor call experience with the excellent call the next day. HP came in last with the frustrating phone system that was nearly impossible to navigate. While your experience could vary greatly depending on the individual service representative, we hope this gives you an idea of how each company handles support and what to expect from each of them.

Monday, April 21, 2008

PayPal may block Safari users

By Aidan Malley
Published: 06:20 PM EST

As part of a multi-tiered approach to guarding against online fraud on its site, PayPal says it will block the use of any web browser that doesn't provide added validation measures, potentially restricting the current version of Safari from the e-commerce site.

The money transfer service's Chief Information Security Officer, Michael Barrett, makes the new policy clear in a white paper (PDF) posted this week, which highlights the browser as a key means of putting an end to phishing (fake website) scams, alongside such steps as blocking fraudulent e-mail messages and pressing criminal charges.

When addressing web access, Barrett argues that any user visiting a financial site such as PayPal should know not only that their browser will block fake sites meant to steal information, but also that the browser can properly indicate a legitimate site. Without either precaution, visitors may not only be victims of scams but may lose all trust in an otherwise safe business. This doubly harmful outcome is likened to a car crash without protection.

"In our view, letting users view the PayPal site on one of these browsers is equal to a car manufacturer allowing drivers to buy one of their vehicles without seatbelts," the expert says.

To that end, PayPal is said to be implementing steps that will first provide warnings against, and eventually block, any browser that doesn't meet these criteria.

Most modern web browsers, including Firefox and newer versions of Microsoft's Internet Explorer, are able to support at least basic blocking of phishing sites. The newest, such as Internet Explorer 7 or the upcoming Firefox 3, also support a new feature known as an Extended Validation Secure Socket Layer (EV SSL) certificate. The measure of authenticity turns the address bar green and identifies the company running the site, letting the user know any secure transactions are genuine.

Safari, however, lacks either of these features and so could fall prey to the blocks and warning messages. Barrett doesn't mention the browser by name but notes that any "very old and vulnerable" software would ultimately be blacklisted from the future update to PayPal's service, placing Safari in the same category of dangerous clients as Microsoft's ten-year-old Internet Explorer 4.

Apple's approach to browser security has so far been tentative. The Mac maker briefly incorporated Google's database of fraudulent sites into beta builds of Mac OS X Leopard this past fall, only to pull the feature in later test versions. Release builds of the stand-alone browser for both Macs and Windows PCs have also gone without the anti-phishing warnings, but notably leave code traces inside the software that raise the possibility of improvements through a later update.

Apple hasn't responded to the white paper but is likely to face pressure as PayPal and similar institutions ask for an all-encompassing approach to fighting scams that involves EV SSL and other software techniques. Internet Explorer 7's debut has already had a demonstrated effect on customers, who are more likely to finish signing up for PayPal knowing that the web browser has authenticated the registration page.

"We couldn’t eradicate this problem on our own – to make a dent in phishing, it would take collaboration with the Internet industry, law enforcement, and government around the world," Barrett explains.

Open source testing tools target varied tasks

April 18, 2008 (4:00:00 PM)
By: Mayank Sharma

Testing is an important function of the software development process, no matter how big or small the development project. But not every company or developer has access to professional testing tools, which can run into hundreds and even thousands of dollars. The good news is that they don't need them, thanks to the tons of freely available open source software testing tools.

In simple terms, there are two major approaches to testing software -- the manual way (a summer intern with a checklist) or through an automated program. With automated testing programs, you can spend a lot of money procuring tools, or distract yourself from the task at hand by rolling out your own customized automated testing software.

Instead, you could head over to sites like Open Source Testing (OST), QAForums, Open Testware, and others that catalog various testing tools and look for something that works for you.

"The largest category of open source testing tools," says Mark Aberdour, who manages OST, "is functional testing tools. This can cover a range of practices from capture-replay to data-driven tests, from Web application testing to Java application testing, and lots more in between." Aberdour, who before his current software development role spent 10 years on the other side of the fence in software testing and test management, says that the list of open source testing tools also includes many performance testing tools and test management/defect tracking tools, as well as a good number of security testing tools.

"If you include unit testing tools, then there are large numbers of tools for the more popular languages, particularly where test-driven development (TDD) is more popular," he says. There are several tools for testing Web and Java applications, but "as is the way in open source," Aberdour says, "if there's an itch, someone will scratch it, so there are tools available for all manner of obscure needs." OST lists testing tools for languages such as PHP, Perl, Ruby, Flash/ActionScript, JavaScript, Python, Tcl, XML, and so on. "The list is probably bigger than I've brought together on OST, and TDD practitioners should head over to testdriven.com, which has more focus on that area."

So how do they compare with the expensive proprietary tools? "In some cases, very well," says Aberdour. He points out WebLOAD and OpenSTA as examples that hold up well in the performance testing market -- no surprise there, since they were both originally commercial tools. Underlining his point, Aberdour says, "You have tools like Wireshark, which is huge in the security market, and Bugzilla and Mantis in the defect tracking sector. In functional testing there are a number of really great tools (Selenium, Abbot, Jameleon, jWebUnit, Marathon, Sahi, soapui, and Watin/Watir to name a few) with strong feature sets."

According to Aberdour, in addition to the tool itself, the advantage of the commercial tools is often their integration with an automated software quality (ASQ) suite, and of course an established company behind them, which will feature in a lot of people's selection criteria. Here again things are looking good for open source testing tools. Aberdour points out the RTH test case tool which integrates with Watir, HttpUnit, JUnit, MaxQ and the commercial WinRunner. Even bug trackers like Mantis and Bugzilla integrate with functional testing tools and test case tracking tools.

When it comes to reliability and accuracy, Aberdour says the classic open source arguments apply. The popular tools are tested and used by a high number of people, and many of the tools have evolved large and sustainable communities with many people feeding back on quality-related issues. "For many of these tools, innovation is high and release processes rapid. Yet it is not a free-for-all -- the code base may be open, but write access to the repository is closely guarded, and developers have to earn the right to commit."

According to Aberdour, the main issue for people using the tools is not whether their need is served, since in most cases you will find a tool to do the job, but whether the tool is mature enough to invest in. "Test automation is a major undertaking that has high costs in terms of upskilling your test team and creating test scripts, etc., and people need to know that the tool is a good investment, whether or not there is a license fee to pay. People will be asking if the tool is going to be around in five years' time, what levels of support are available, how good is the feature innovation, bug fixing, release schedules, and so on." Aberdour thinks there is a big gap in selection and evaluation support and paid technical support services, which is what he wants to focus on next with OST.

Of course there are strong and sustainable communities around the more popular tools that provide excellent support, but Aberdour says the market isn't at the point yet where there is a lot of commercial support available. Yet there are some examples of support companies evolving around open source products, and a lot of the product teams will hire themselves out. "In a commercial sense this market is still quite young, but it will mature, and when the product integration, community maturity, and commercial support are right, it has the potential to be highly disruptive. Gartner reckons the ASQ market is worth $2 billion a year, and while Hewlett-Packard and Mercury absolutely dominate, there are hundreds of smaller proprietary vendors at real risk from open source disruption. Things are certainly moving in the right direction."

Open source testing tools offer great performance and are a bargain compared to proprietary testing tools. The lack of a formal and consistent support structure might work against some tools being used to test mission-critical apps. But if you are an open source developer, or work at a software development company pondering its testing budget, spend some time checking out the testing tool websites and forum boards. You might save yourself some serious bucks.

Howto: Set up a database server with PostgreSQL and pgadmin3

Posted by admin on April 14th, 2008

PostgreSQL is a powerful, open source relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL92 and SQL99 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others.

pgAdmin III is the most popular and feature rich Open Source administration and development platform for PostgreSQL, the most advanced Open Source database in the world. The application may be used on Linux, FreeBSD, OpenSUSE, Solaris, Mac OSX and Windows platforms to manage PostgreSQL 7.3 and above running on any platform, as well as commercial and derived versions of PostgreSQL such as EnterpriseDB, Mammoth PostgreSQL, Bizgres and Greenplum database.

pgAdmin III is designed to answer the needs of all users, from writing simple SQL queries to developing complex databases. The graphical interface supports all PostgreSQL features and makes administration easy. The application also includes a syntax highlighting SQL editor, a server-side code editor, an SQL/batch/shell job scheduling agent, support for the Slony-I replication engine and much more. Server connection may be made using TCP/IP or Unix Domain Sockets (on *nix platforms), and may be SSL encrypted for security. No additional drivers are required to communicate with the database server.

Install PostgreSQL and pgadmin3 in Ubuntu

PostgreSQL 8.2 will be installed on Ubuntu 7.10 (Gutsy Gibbon):

sudo apt-get install postgresql-8.2 postgresql-client-8.2 postgresql-contrib-8.2

sudo apt-get install pgadmin3

This will install the database server/client, some extra utility scripts and the pgAdmin GUI application for working with the database.

Configuring PostgreSQL in Ubuntu

Now we need to reset the password for the 'postgres' admin account on the server:

sudo su postgres -c psql template1
template1=# ALTER USER postgres WITH PASSWORD 'password';
template1=# \q

That alters the password within the database; now we need to do the same for the Unix user 'postgres':

sudo passwd -d postgres

sudo su postgres -c passwd

Now enter the same password that you used previously.

From here on we can use both pgAdmin and command-line access (as the postgres user) to manage the database server. But before you jump into pgAdmin, we should set up the PostgreSQL admin pack, which enables better logging and monitoring within pgAdmin. Run the following at the command line:
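With the postgresql-contrib-8.2 package installed, the admin pack is normally loaded by feeding the contrib SQL script to psql; the exact path below is an assumption and may differ on your system:

sudo su postgres -c psql < /usr/share/postgresql/8.2/contrib/adminpack.sql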

Next, we need to open up the server so that we can access and use it remotely - unless you only want to access the database on the local machine. To do this, we first need to edit the postgresql.conf file:

sudo gedit /etc/postgresql/8.2/main/postgresql.conf

Now edit a couple of lines in the 'Connections and Authentication' section.

Change the line

#listen_addresses = 'localhost'

to

listen_addresses = '*'

and also change the line

#password_encryption = on

to

password_encryption = on

Then save the file and close gedit.

Now for the final step, we must define who can access the server. This is all done using the pg_hba.conf file.

sudo gedit /etc/postgresql/8.2/main/pg_hba.conf

Comment out or delete the current contents of the file, then add this text to the bottom of the file:

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database super user can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (autovacuum, daily cronjob, replication, and similar tasks).
#
# Database administrative login by UNIX sockets
local all postgres ident sameuser

# TYPE DATABASE USER CIDR-ADDRESS METHOD

# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5

# Connections for all PCs on the subnet
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all [ip address] [subnet mask] md5

In the last line, add in your subnet mask (i.e. 255.255.255.0) and the IP address of the machine from which you would like to access your server (i.e. 138.250.192.115). However, if you would like to enable access to a range of IP addresses, just substitute the last number with a zero, and all machines within that range will be allowed access (i.e. 138.250.192.0 would allow all machines with an IP address of 138.250.192.x to use the database server).
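As a concrete sketch using the example addresses above, that final host line would end up looking like this:

host all all 138.250.192.0 255.255.255.0 md5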

That's it; now all you have to do is restart the server:

sudo /etc/init.d/postgresql-8.2 restart

That's it -- you can now start using PostgreSQL in Ubuntu.

Create a Database from command line

You can also use pgadmin3 for all PostgreSQL-related administration tasks.

To create a database with a user that has full rights on the database, use the following commands:

sudo -u postgres createuser -D -A -P mynewuser

sudo -u postgres createdb -O mynewuser mydatabase
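To check that the new role and database work, a quick connection test along these lines should do (this assumes the server is running on the local machine and password authentication is configured as above):

psql -h localhost -U mynewuser mydatabase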