
Saturday, September 10, 2011

BlackBerry Bold 9900 Variations In The Mobile Phones

The BlackBerry Bold 9900 is the latest product from BlackBerry to hit the stores, and it is already stirring up something of a revolution for the smartphone maker. BlackBerry is often perceived as focusing only on business customers while paying too little attention to the market as a whole. These days, customers are demanding entertainment features whether they are business users or not. BlackBerry was slow to recognise this shift, which has left a gap between the company and a host of its rivals. The BlackBerry Bold 9900 is one of the first phones that attempts to bridge this gap by offering a wide range of features.

However, it is not mainly the hardware that makes the BlackBerry Bold 9900 one of the finest all-rounders BlackBerry has launched to date; it is the operating system. The phone is one of the first BlackBerry devices to run BlackBerry OS 7, which brings improvements across many areas of the OS. The most important of these is the WebKit browser, which has been thoroughly revamped to provide a much better user experience. BlackBerry claims this browser alone is almost 50 percent faster than the one in the previous BlackBerry OS 6. In addition, the BlackBerry Bold 9900 offers a new interface that is far more intuitive to use, unlike earlier releases where the operating system was not really designed for a touchscreen display.

You may be wondering whether an operating system built around a touchscreen is the right choice for the BlackBerry Bold 9900, one of the quintessential business phones you can get today. The simple fact is that the BlackBerry Bold 9900 is a business phone that provides a touchscreen display alongside every other business feature. That entertainment is clearly not the priority for the Bold 9900 is evident from the size of the display: 2.8 inches. However, this display offers a resolution of up to 640 x 480 pixels, which is almost comparable to the WVGA resolution of most devices with far larger screens.

Coming back to the operating system, it is clearly one of the reasons the BlackBerry Bold 9900 is more skewed towards entertainment features than its predecessors. Such features cannot be delivered overnight or without a powerful processor. Recognising this, BlackBerry has given the Bold 9900 an extremely fast 1.2 GHz single-core processor, the same chip used in the two other premium phones the company launched recently. The BlackBerry Bold 9900 will be available on contracts from various networks for around £42 per month in the United Kingdom.

Buy LG Optimus Black And LG Optimus 2x in Chennai

LG is a leading name among the mobile companies in the country and has earned a distinct place in the market. LG mobiles are best known for long battery life and durability across their handsets. The company has produced many attractive designs and models, and it recently surprised its rivals by launching its much-awaited LG Optimus Black. The Optimus Black runs Android, is easily upgradable to v2.3 and is powered by a TI OMAP processor, and it supports both 2G and 3G networks. It has a good-looking 4.0-inch capacitive LCD touchscreen with a resolution of 480 x 800 pixels, along with an MP3 player, live chat, video recording and an accelerometer sensor for UI auto-rotate. What really sets it apart from similar phones are features such as Google Search, YouTube, Google Talk, Google Maps, Gmail, a document viewer and a digital compass. LG owes much of its popularity to such distinctive products. The LG Optimus Black price in Chennai is roughly in line with other cities and seems quite reasonable given its uniqueness and sharp features.

The LG Optimus Black price in Chennai is approximately Rs. 18,850/-, which seems a bit expensive; the phone is aimed at the premium segment and has features to match, and LG is gaining popularity with the launch of this masterpiece. LG mobiles stand out with their distinctive shapes and designs. The LG Optimus 2X is the first commercial phone with a 1 GHz dual-core NVIDIA Tegra 2 chipset, which makes it very special, and many consider it the best Android smartphone on the market to date. The build feels solid at 4.9 ounces, with a 4-inch screen at 480 x 800 pixels resolution. On the back of the LG Optimus 2X there is an 8 megapixel camera with LED flash, and there is also a front camera for video chatting, although without an LED notification light. The handset uses quality materials such as soft-touch plastic and a metal strip on the back etched with the Google branding, which keeps it from looking plain.

The LG Optimus 2X price in Chennai starts from Rs. 24,550/-, which makes it a worthwhile and valuable phone.

If you are interested in buying either of these LG mobiles, you can visit khojle.in, which allows you to post free ads for buying and selling products without any investment.

The Brand New Samsung Galaxy R Is A Highly Functional Handset

In this article I will take a closer look at some of the key features of the new Samsung Galaxy R. This is one of a number of new additions to the Samsung Galaxy range of smartphones, and offers a great spec list as well as appealing styling.

The display is one of the most important aspects of a smartphone, and the Samsung Galaxy R boasts a large 4.2 inch SC-LCD capacitive touchscreen with a high resolution of 480 x 800. This works out to a pixel density of 222 ppi, so the display quality is very good, making photos, videos and games look great on the screen. Accelerometer and proximity sensors are included as standard, and the touchscreen itself is very responsive thanks to the powerful 1 GHz processor under the hood.
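As a rough sanity check on that 222 ppi figure, pixel density is just the diagonal pixel count divided by the diagonal size in inches. The snippet below is an illustrative calculation based on the quoted 480 x 800 resolution and 4.2 inch diagonal, not code from any handset.

```typescript
// Rough pixel-density check for the quoted display figures:
// ppi = sqrt(width_px^2 + height_px^2) / diagonal_inches
function pixelsPerInch(widthPx: number, heightPx: number, diagonalInches: number): number {
  const diagonalPx = Math.sqrt(widthPx * widthPx + heightPx * heightPx);
  return diagonalPx / diagonalInches;
}

// 480 x 800 pixels on a 4.2-inch diagonal works out to roughly 222 ppi.
console.log(pixelsPerInch(480, 800, 4.2).toFixed(1)); // ≈ 222.1
```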

Ample storage is provided with the Samsung Galaxy R, with 8 GB included as standard. This means that a large number of files such as music tracks, videos, photos and downloaded applications can be stored on the phone, available at the users' fingertips. In order to gain further storage, one can simply utilise the integrated microSD slot by installing a memory card of up to 32 GB, greatly increasing the potential available storage.

Web Browsing

Just like the rest of the Samsung Galaxy series, the Samsung Galaxy R is perfectly equipped to browse the web on the go. With an HSDPA connection providing fast data download speeds in areas with 3G coverage, plus Wi-Fi connectivity, users always have the most appropriate connection engaged based on the coverage at their location. The full HTML browser ensures that no websites are out of bounds, thanks to Adobe Flash support which enables embedded content such as videos and games to be displayed just as they would be on a desktop or laptop computer.

Operating System

The Samsung Galaxy R utilises the latest version of Android (v2.3 aka Gingerbread) to provide a customisable and versatile operating system. The intuitive TouchWiz UI (Samsung's custom Android UI) is the user interface of choice, allowing users to customise multiple homescreens with apps and widgets. Access to the Android Market allows users to download any imaginable app straight to the phone, further increasing the functionality of the phone.

A smartphone would not be a smartphone if it did not come with a built-in digital camera. The Samsung Galaxy R has a 5 megapixel unit which is capable of capturing photos with a resolution of 2592 x 1944, resulting in great quality images. The camera comes with geo-tagging, smile detection, autofocus and LED flash to name but a few of its features. The camera can of course also shoot video footage, and this is in 720p quality. There is a secondary camera located on the front of the phone which allows users to take self portrait photos and carry out video calls with others who have a compatible smartphone.

I have only outlined some of the key features of the handset in this article, but as you can see, it offers a great level of technology, and looks set to prove popular thanks to this combined with its stylish aesthetics and easy-to-use interface. The Samsung Galaxy R is due for release shortly, and it is likely that a number of UK networks will offer the handset free of charge on some of their contracts.

Android this week: $29 smartphones; Droid Bionic arrives; Netflix updated

By Kevin C. Tofel Sep. 10, 2011, 6:00am PT 1 Comment

This week saw both a $29 and a $299 Android phone launch, showing the extreme range of handsets that run Google’s mobile platform; something that’s helping it grow smartphone market share. Although there’s a wide variance in the price between the two, both are capable of connecting consumers to fast mobile broadband networks, a large app store, and social networks. The difference is in the experience.

Huawei’s Impulse 4G for AT&T is aimed at current feature phone users looking to step up to the smartphone world. At $29 with contract, you wouldn’t expect much in the way of hardware, but the device has some hardware features that were standard on more expensive phones last year: A 3.8-inch 800×480 touchscreen, 5 megapixel camera with HD recording capability, and GPS to name a few. The 800 MHz processor won’t set any speed records, but should be good enough for most tasks on a first-time smartphone.

At the other end of the spectrum is Verizon’s new Droid Bionic, made by Motorola. For $299 with contract, the handset is generally considered to be cutting-edge; at least for a few months, given the fast-paced mobile technology cycle, particularly with Android phones. The phone has a 4.3-inch, 960×540 display, is powered by a 1 GHz dual-core processor with a full gigabyte of memory, has 32 GB of storage out of the box and connects to Verizon’s LTE network.

I’ve only spent a short time using a review unit of the Bionic, so I can only share some initial impressions for now. Overall, the phone is fast and responsive. The camera may be the best yet in a Motorola smartphone. And so far, the Bionic handles network transitions reasonably well: it has switched between 3G and 4G networks (due to coverage) faster than other LTE devices I’ve used in the past.

The Bionic has a number of docking accessories, including one that looks like a laptop but is powered by the phone, offering expansion options. I’ll have more thoughts in a detailed review soon, but for now, I’m generally impressed with the Bionic.

I’m also impressed with Netflix for Android, but the problem for many has been one of device support. When Netflix finally launched in May of this year for Android phones, it was only available for a half-dozen handsets. At that time, the company said it would have to test the software on each individual phone model.

That seems to have changed this past week as Netflix updated the application, saying it can now run on any Android 2.2 or 2.3 device. At last check, 81.9 percent of Android devices hitting the Android Market ran those two versions, meaning around 4 of every 5 current Android phones and small tablets can enjoy Netflix on the go.

The Superb Samsung Galaxy Xcover Is Waterproof To 1 Metre Yet Packed With Features

The popular Samsung Galaxy smartphone series has just seen a number of new additions. One of these is the Samsung Galaxy Xcover. This handset sees the levels of smartphone technology you would expect from the Galaxy series, but with a twist.

With IP67 certification, it joins the ranks of handsets such as the Motorola Defy that are aimed at those who need their phone to have a certain degree of ruggedness and protection from the elements. IP67 certification is only given to products which meet the following criteria:

- Total dust ingress protection

- Protection against water immersion between 15 cm and 1 m

Therefore the Samsung Galaxy Xcover is ideal for anyone who works outside, or lives a lifestyle where they are often outdoors meaning they need a phone which can handle adverse weather conditions.

The beauty of this handset is that it does not sacrifice its functionality as a smartphone in order to achieve its weather proof credentials. So what technology is included with this handset?

A 3.65 inch capacitive touchscreen which boasts a resolution of 320 x 480 means the handset is great for multimedia use, be it viewing photos or videos, playing games or browsing the web. Additionally, the screen itself is manufactured from Gorilla Glass, which helps prevent cracks and chips should the phone be accidentally dropped. The touchscreen is also a very responsive means of navigating the TouchWiz UI thanks to an 800 MHz processor, a feature which also brings benefits to other software and hardware aspects of the phone.

The Samsung Galaxy Xcover is also a good camera phone. Although it may lack the resolution of its counterparts and many competing handsets, it is still capable of taking good quality still images thanks to the 3.2 megapixel unit, which is also capable of video capture, and as an added bonus an LED flash is included.

The latest version of the Android OS is included, so users have the versatility to customise their phone and download applications from the Android Market, greatly increasing the functionality of the phone. The handset also comes with 150 MB of storage, but thanks to an integrated microSD slot, a memory card of up to 32 GB can be installed, meaning users have plenty of scope for storing music tracks, videos, applications, games, documents and more.

The Samsung Galaxy Xcover may not have the widespread appeal of the rest of the Galaxy range, but thanks to its dustproof and waterproof qualities, it is sure to prove popular among those who seek those qualities in a mobile phone.

Why computing isn’t going away, just hiding in the clouds

By Richard L. Schwartz Sep. 10, 2011, 12:00pm PT No Comments

Lately, we’ve read a lot about the end of the computer age. In reality, it isn’t over. The computer is simply hiding in the clouds behind commonplace devices. As it should — and should have long ago, had we been able to figure out how sooner.

For the last couple decades, we’ve studiously siloed computing devices (desktops then laptops) from communication devices (phones) from media devices (iPods then e-book readers).

Then the bright lines between products, content and connectivity blurred as computing became increasingly distributed, physically and programmatically. This blurring has challenged enabling infrastructure to handle new functionality that straddles previously distinct silos.

Apple’s iPhone and Amazon’s Kindle 3G have highlighted — and accelerated — this trend. Both vertically integrate the device and its connected services behind a single consumer brand. Amazon went a step beyond Apple, building connectivity directly into the device through wholesale agreements with carriers under its own “Whispernet” platform.

Both Apple and Amazon were rewarded with the lion’s share of industry profits by owning the choke point for after-point-of-sale services.

Now, the writing is on the wall for other players: Embrace scalable cloud economics or remain stuck in a low-margin hardware business. Cloud services can be averaged over multiple product lines, and the marginal cost of adding features in the cloud is negligible when compared to that of hardware. Device makers who realize and exploit this will profit.

Let’s assume you make or are a VAR for a tablet, laptop or other CE device. Here’s my prescription to get there:

Think of the device as a receptacle to a branded cloud-service portal that you control.

Build in cloud extensions as a core feature set that differentiates your hardware.

Make connectivity transparent to ensure an always active connection to the cloud.

Build or partner to deliver the above as a single, branded user experience.

Let’s explore those steps in pieces.

Even in the best case scenarios, tablets and laptops are differentiated by increasingly fewer hardware components assembled by the same mega-ODMs. But think instead of storage memory as a cloud service. Imagine a tablet or laptop that comes with 1 terabyte of storage in the cloud. Or think of a media tablet “tuned” to Comcast or Dish Network channels, or business laptops that embed remote IT troubleshooting or Find My PC as standard feature sets.

What about connection to the cloud features? Do you build in only the basic Wi-Fi chipset and hope the device is connected often enough to your cloud features? Or do you bite the bullet and add a 3G or 4G modem to fill the gap between hotspots? How much business do you lose when your cloud can’t be reached?

Our primary research (at Macheen) shows that 72 percent of laptops (and that number is much higher for tablets and e-readers) go outside Wi-Fi range each month; more than half that group go off Wi-Fi five or more days a month. For core cloud service functionality, this means it can be important to enable access outside Wi-Fi range. And while it isn’t always obvious, there is an incremental hardware cost on every unit shipped to enable even a single device actually sold — to accommodate antenna design, for example. Reselling traditional carrier data plans may not deliver the goods. Industry adoption rates for enabled units can be as low as 1 percent, so even if you ship connectivity as an option, user take rates may not move the needle enough to enable cloud features. Yet by changing assumptions, we at Macheen are seeing adoption rates as high as 70 percent for enabled and active units.

Transparent connectivity (think 3G Kindle) is the key. At a minimum, find a way for all units to have an always-active connection to your cloud services — right out of the box. Transparent connectivity doesn’t mean free connectivity. As with the Kindle example, you should cover access costs in fees for content, applications and services. And as for freewheeling web access, treat that as a premium service and charge for it. Think of this as flipping the iPad data services model inside out: Justify constant connectivity through the apps and use open access as an upsell opportunity.

So who puts all this together? Different companies will answer that differently. But what seems clear is that it needs to be a single experience for the user, one that works from the point at which it is turned on and is as simple to operate as an on/off switch.

I do not believe that the computer industry will collapse down to one or two players. But I do believe that the Apple and Amazon lessons are real and require other CE players to borrow the lessons and remake themselves.

Computers aren’t going away, they’re just hiding in the clouds. Just as they should be.

Richard L. Schwartz is the President and CEO of Macheen Inc. 

Image courtesy of Flickr user Benson Kua.

VMware tackles task management with Strides

By Derrick Harris Sep. 9, 2011, 9:06am PT No Comments

The Socialcast team at VMware has introduced a beta version of a new product called Strides, which aims to make task management an interactive experience. Think Basecamp, but more social, and with more visibility into what your colleagues are working on. Strides, of course, is just the latest venture of a new, more application-centric VMware.

Strides lets people create tasks, invite participants, share files, monitor progress and do all the other things one might expect from a project-management service, but it also takes things a step further. Users can see who else is online and working on a particular project at any given time, can exchange comments in real time and can filter by a variety of different characteristics.

If we’re going to get people away from writing Post-It notes and holding unnecessary meetings, Socialcast Founder and now VMware VP of Social Enterprise Timothy Young told me, “we need to make the interface really fast and really fluid.”

Young thinks Strides will be particularly useful for large companies, inside of which it can be difficult to keep track of everything that’s going on. Department heads and managers, even C-level executives if they’re so inclined, will be able to get a broad view of what their teams are working on and who’s doing what. And, of course, they can manage the tasks and interact with their employees, too.

Although Strides and Socialcast are separate products, Young said the two ultimately will be integrated similar to how the various Google applications are. Users will be able to open Strides from a link on their Socialcast pages, profile information will be shared between the two applications and, perhaps most importantly for users, Strides updates will appear in their Socialcast streams.

Young views Strides as a move up the stack from Socialcast, which makes it an even higher step up for VMware, which just recently got into the application space. During VMworld, VMware CEO Paul Maritz talked a lot about applications replacing infrastructure in terms of strategic importance, and this approach has garnered many a comparison with Maritz’s previous employer, Microsoft.

Young thinks that’s a fair comparison, but noted that we’re talking about a different technology stack than in the previous generation, and users are part of the equation now, too. VMware is trying to bring the efficiencies it brought to computing resources to human resources, Young explained, which means that applications have to help IT shift from just managing devices to also managing people.

How Android app growth will mean payoff for devs

By Ryan Kim Sep. 9, 2011, 9:21am PT 5 Comments

Android app downloads are expected to overtake iPhone apps this year for the first time, according to research firm Ovum, and will nearly double iPhone downloads by 2016. The fast growth comes from Android’s overall sales momentum and consumers’ growing appetite for apps, Ovum said. It would also appear Android is benefiting from third-party app stores like GetJar and Amazon Appstore, which can also move apps.

Ovum said the iPhone saw 2.7 billion downloads last year, almost double that of Android at 1.4 billion. But by the end of this year, Android is expected to have 8.1 billion downloads compared to 6.1 billion for Apple’s smartphone. And by 2016, Android is forecast to have 21.8 billion app downloads versus 11.6 billion for the iPhone.

Apple, however, will still generate more revenue for paid apps, said Ovum, who predicted that iPhone paid download revenue will hit $2.86 billion in 2016, compared to $1.5 billion for Android. It’s also important to note that Ovum is just comparing phone app downloads, while Apple has a huge lead in tablet apps and downloads.

But if the Ovum numbers are correct, it raises some interesting implications for developers. Yes, Apple will still generate more money through paid downloads, both iPhone and iPad. But as we’re seeing increasingly, freemium apps with in-app purchase are generating more money now than paid downloads. Users will still be attracted to high-quality paid apps, and they’re likely to find them most reliably on iOS for some time. But the fact that Android is poised to drive more downloads is significant, especially for free apps looking to obtain revenue from in-app purchases and advertising. If Android can deliver the eyeballs, it could begin to shift developer momentum toward the platform. But people have to be making money on free Android apps and we’re seeing more signs of that, not just Angry Birds.

I wrote last month about Tap Fish, a game from Gameview Studios, which said the Android and iOS versions of the free app produce comparable average revenue per user on most days, and that on some days the ARPU on Android is 30 percent higher than on iOS. That’s due to the fact that Android doesn’t ban incentivized install app distribution campaigns like Apple does, and it also provides some development flexibility that helps highlight the app.

Outblaze, a Hong Kong developer, told Inside Mobile Apps this week that average revenue per user for its Android apps is 30 percent higher than on iOS. That’s partly due to less competition on Android, ensuring mass compatibility for the apps on Android devices and aggressive updating. Both Gameview and Outblaze said that complaints about not monetizing well on Android are outdated, at least for them.

If you look at the larger picture then, developers on Android have a lot to look forward to. The monetization picture is getting better and I would argue that the apps on Android are also improving. And if downloads of Android apps now soar, it means they can benefit from a lot of scale. The gap in revenue between iOS and Android apps is starting to close and that should accelerate as app downloads on Android go up.

Now, there are a lot of warnings to consider too. As I wrote about, Android users appear to download a narrower selection of the top apps. So more downloads won’t benefit many developers if they’re not more evenly distributed. That’s where better discovery tools and increased quality across the board will help, especially as the Android Market gets more crowded.

Also, Android must stay on top of piracy. The Yankee Group published findings from a survey of 75 developers conducted by Skyhook Wireless, which found that 27 percent see piracy as a huge problem and about a third of developers say piracy has cost them more than $10,000 in revenue. That’s more of a concern for paid apps, but it’s something that raises concerns about the quality of Android apps in general.

But I think over time, we’ll see that developers are going to have more options as Android’s app momentum grows. Apple’s App Store will still be the place for developers who want to sell paid apps, but when it comes to more freemium apps developers now have more reason to build up their Android business. And perhaps over time, that will be where freemium app makers focus to make their money. Not everyone will see results like Gameview or Outblaze right away, but if downloads soar as Ovum predicts, there’s a better shot to finally make some money on Android.

Google’s Zagat buy could give search critics more ammo

By Mathew Ingram Sep. 9, 2011, 10:03am PT 7 Comments

Google’s acquisition of Zagat, which was announced on the Google blog on Thursday, is a relatively small one for the search-engine giant: less than $66 million, according to some estimates. But the ripples created by the deal could be far larger than its monetary value, because the review-based service is a content producer, which takes Google into an area it has mostly avoided in the past. That the web behemoth plans to make Zagat a central part of its local recommendations could give “search neutrality” advocates — and possibly even the Federal Trade Commission — more ammunition in their battle with the Google empire.

As Stacey described in her post on the news, Google is clearly interested in the “people power” that Zagat brings to its local recommendations — something the search company tried to get by acquiring Yelp in 2009, only to be turned away (as it was when it attempted to buy Groupon for a rumored $6 billion earlier this year). Although it’s often confused for a paid-review service, Zagat actually pioneered a pre-web version of the “crowdsourcing” approach that Yelp and others have taken to local reviews. But the review service has reportedly had trouble making the leap into the web era.

Is Google plus Zagat a Yelp-killer?

Much of the coverage of the news has portrayed the Zagat purchase as Google’s way of “killing” Yelp — and that’s the kind of language likely to raise warning flags both at the FTC and for many of Google’s other critics. Even some Google fans — such as journalism professor Jeff Jarvis, author of the book “What Would Google Do?” — have raised concerns about what a Zagat purchase means for the company. Jarvis said:

Google buying Zagat as a content company would concern me. It would bring channel conflict and put Google in the content-creation instead of the content-linking business, competing with the other side of links and raising conflict-of-interest questions.

To complicate things, Google has even more of a history with Yelp than just trying to acquire the company. For a time, the search company was aggregating reviews from Yelp and other services and showing them on its Google Place pages for a specific location. Yelp CEO Jeremy Stoppelman cried foul over this behavior, however, saying the search company was stealing traffic from his service, and Google recently stopped doing so. (Stoppelman also said Google effectively told the company if it didn’t like its reviews being aggregated, it could remove itself from Google’s search index altogether.)

Buying Zagat is obviously an attempt to replace that kind of content, but it could cause even more problems for the company if Google starts giving preferential treatment to Zagat results. That’s because a key part of the criticism the company gets — criticism that has led to antitrust inquiries and/or investigations not just in the United States, where the FTC notified Google earlier this year that it has started an inquiry, but in Europe as well — is that it distorts search results to send traffic to its own in-house properties.

More fodder for the “search neutrality” crowd

This kind of criticism even has its own buzzword: “search neutrality,” which tries to draw a direct link between the idea of net neutrality — which is supposed to prevent large telecom and networking companies from giving their own content preferential treatment — and the idea that Google should provide unbiased search results rather than favoring its own assets. Critics such as FairSearch.org, whose members include Microsoft and web services like Expedia  and Travelocity, say the company causes harm to other companies and to users by giving its own services prominence in search results. They have said that:

Google’s anticompetitive practices include scraping and using other companies’ content without their permission, deceptive display of search results, manipulation of search results to favor Google’s products, and the acquisition of competitive threats to Google’s dominance.

Zagat isn’t Google’s only move into the area of content — the company has long owned Blogger, a blog-publishing platform that competes with WordPress (see disclosure below), and YouTube is also a giant user-generated content provider. But so far, no one has really tried to argue that Google is depriving users of access to competing sources of blog content or funny videos. Zagat’s content, however, is directly related to what many see as a hot and competitive market: namely, local recommendations and reviews.

In some ways, Google can’t win: When it tried to aggregate Yelp reviews as a way of adding local content to its Google Place pages, the service complained that it was stealing traffic. Now adding its own reviews from Zagat is likely to draw critics who say it is competing with Yelp and others unfairly by giving prominence to reviews from its own service. And if Google makes further inroads into content-related businesses, as some expect that it will, those kinds of criticisms are only going to intensify.

Google’s argument, outlined by Google Fellow Amit Singhal in a blog post following the announcement of the FTC inquiry, is that Google simply tries to provide the best search results it can, and that users are free to go elsewhere if they wish. But it remains to be seen how well that kind of Pollyanna-ish approach goes over with the feds.

Disclosure: Automattic, the maker of WordPress.com, is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

Post and thumbnail photos courtesy of Flickr users Mark Strozier and bloomsberries

The paywall debate: monetizing news in the digital era

By Paul Sweeting Sep. 9, 2011, 11:00am PT No Comments

The transition from print to web-based publishing has been rocky for many traditional newspaper and magazine publishers. While online readership has soared, online advertising revenue is a fraction of what print ads once brought in. Print remains profitable, but hard-copy circulation has continued to shrink.

Those scary trend lines have forced publishers to rethink their approach to doing business online, and that includes erecting subscription paywalls. But as I discuss in my latest report for GigaOM Pro (subscription required), simply trying to replicate the old print subscription model online by asking readers to pay a single price for unlimited access may not be the optimal approach to monetizing content online. Rather, publishers need more flexible paywalls that support multiple payment options and multiple content configurations.

In the print world, audiences tend to be demographically and behaviorally homogeneous, so it makes sense to treat all readers as being roughly equal when it comes to their economic value to the publisher. Research cited in the report, however, shows that online news audiences are highly differentiated: Readers exhibit widely varying behavior and widely varied levels of engagement with the publisher’s content.

The vast majority of the unique visitors to most online news sites, for instance, is composed of search-directed “flybys,” who typically have little investment in the publisher’s brand and generate only a fraction of total page views. Most page views tend to be generated by a small slice of highly dedicated readers.

[Chart: Share of total monthly unique visitors by type of visitor, U.S. Source: Scout Analytics]

[Chart: Share of total monthly page views by type of visitor, U.S. Source: Scout Analytics]

Those differences have significant economic implications for publishers. Extensive consumer research on related online content industries — music, video, gaming — cited in the report shows that engagement level correlates closely with willingness to pay for content and with the respective levels of price sensitivity among the different groups of users.

Willingness to pay is not a fixed quantity, however. Consumer research and the experiences of related industries show that users who may not be willing to pay full price for an all-access subscription may still be willing to pay for a select bundle of content if packaged and priced appropriately. Racing enthusiast publication Autosport, for instance, offers its subscription content on an à la carte basis online for $1 per article. The pay-per-use offer has led to a 75–80 percent increase in first-time paid users.
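To make the idea of multiple payment options and multiple content configurations a little more concrete, here is a minimal sketch of how a publisher's offer catalogue might be represented and matched to a reader's expected usage. The offer names and prices are hypothetical illustrations, not figures from the report.

```typescript
// Hypothetical offer catalogue for a flexible paywall: the same content can be
// sold as an all-access subscription, a single-topic bundle, or per article.
type Offer =
  | { kind: "all-access"; pricePerMonth: number }
  | { kind: "bundle"; topic: string; pricePerMonth: number }
  | { kind: "per-article"; pricePerArticle: number };

const offers: Offer[] = [
  { kind: "all-access", pricePerMonth: 15.0 },
  { kind: "bundle", topic: "high-school-sports", pricePerMonth: 5.0 },
  { kind: "per-article", pricePerArticle: 1.0 }, // Autosport-style à la carte
];

// Pick the cheapest offer that covers a reader's expected monthly usage.
function cheapestOffer(articlesPerMonth: number, topicOnly: boolean): Offer {
  const costs = offers.map((o) => {
    if (o.kind === "per-article") return o.pricePerArticle * articlesPerMonth;
    if (o.kind === "bundle") return topicOnly ? o.pricePerMonth : Infinity;
    return o.pricePerMonth;
  });
  return offers[costs.indexOf(Math.min(...costs))];
}

console.log(cheapestOffer(3, false));  // light reader: per-article wins
console.log(cheapestOffer(40, false)); // heavy reader: all-access wins
```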

Even among those willing to pay, moreover, online content consumers can be highly sensitive to the purchase experience, suggesting publishers could have greater success at converting readers into paying customers by paying closer attention to the online checkout process.

Some of the techniques publishers are finding most effective for converting online content users into buyers, in fact, more closely resemble traditional retail merchandising and e-commerce than they do subscription sales.

By carefully researching the interests and behavior of its online readers, for instance, the Dallas Morning News was able to create a new paid-content revenue stream by packaging all of its high-school sports content into a stand-alone product and offering it separately from subscriptions to the rest of the site.

Finally, not all paywalls need to be consumer-facing. Putting content behind a paywall is a critical first step toward fostering robust business-to-business commerce between content originators and downstream aggregators that would allow all to share fairly in the online news ecosystem. While the platforms and tools to support that market-making function do not yet exist, there is a significant long-term opportunity for entrepreneurs to begin developing them.

For the full in-depth analysis and strategies for monetizing online content, please see the full report at GigaOM Pro (subscription required).

GigaOM Pro will host a webinar, “Building a better paywall,” on Wednesday, Sept. 14. All attendees will get a complimentary copy of the report. 

Image courtesy of flickr user Matt Callow

Now that publishers get it, comics and the iPad are a perfect pair

By Darrell Etherington Sep. 9, 2011, 11:54am PT 5 Comments

The DC storefront in ComiXology's Comics app for iPad.

DC Comics recently began selling digital versions of its new comics on the same day that print editions show up in local comic book shops. It’s a move that coincides with, and helps promote, its “New 52,” a complete relaunch of all of its monthly titles beginning at issue #1, but its effects are already being felt beyond just DC’s own stable of properties.

Beating out books

Last week, three of the top five most popular iPad apps in the Books category were comics apps. Comics by ComiXology came in second overall to Al Gore’s Our Choice, followed by the official Marvel and DC apps at numbers three and four respectively. Note that ComiXology’s tech powers both the Marvel and DC apps, and its own app offers titles from both publishers, as well as a number of others including IDW and Image comics.

But the successes for digital comics on the iPad don’t end there. Comics by ComiXology also became the second highest grossing iPad app — all categories included — as of Wednesday night this week (it’s still at number three at the time of writing). ComiXology CEO David Steinberger directly attributes part of the reason behind his app’s good fortune to DC’s decision to go digital:

[W]hen you look at our success as one of the top grossing iPad Apps and then take in the news about DC Comics going back to print on almost all of The New 52 comics, you can see that digital distribution is growing both the print and digital comic book market. DC’s daring move going digital the same day as print with all their print publications this month in combination with the great buying and reading experience our App provides is a tide that lifts all boats.

A page from Grant Morrison's Action Comics #1, part of DC's "New 52."

It sure does look like the changed approach to digital comics is lifting all boats. Consider that Marvel beat DC itself in terms of gross last week and continues to ride high at number 18 overall among iPad apps, despite having yet to offer its own blanket approach to same-day digital. It is headed in the right direction, however, as it announced in July at San Diego Comic-Con 2011 that it would be rolling out simultaneous digital and print releases gradually for all X-Men and Spider-Man titles. Image is likewise testing same-day releases through ComiXology with at least a few titles.

Same-day digital definitely delivers

Same-day digital appears to be the magic ingredient that was missing from the iPad comic mix. If you’ve ever tried reading comics on the iPad, you know that it’s an almost-perfect delivery method. The screen size and resolution are both good enough that you should be able to view full single-page layouts without zooming or squinting, you can carry an entire library on a single device, and you don’t have to worry about dog-earing the corners. Seriously, though, for all but devoted collectors, it’s a nearly ideal alternative to paper, and digital editions are also often cheaper.

One drawback to digital is load times, but you can view content as it downloads, one page at a time.

Regardless of what you think of DC’s decision to relaunch its entire line (I’m not that impressed by what I’ve seen so far, with the exception of Grant Morrison’s Action Comics #1), you can’t deny that the bold decision to make everything available digitally on the same day will help push the industry forward. Marvel will likely have to speed up its rollout of day and date iPad releases now that DC is seeing big dividends in terms of gross revenue, and partners like ComiXology will be pushing them to follow suit, because, again, doing so would “lift all boats.”

One more key ingredient missing

There’s still one piece missing for digital delivery, however, and I wrote about it back in May: subscriptions. Comics should be leaving behind the idea that they can still become valuable collector’s items, and same-day digital is one step towards that. But making titles available digitally on a subscription basis is the next, and possibly more important, step.

If I could buy an Uncanny X-Men auto-renewing subscription through my iPad, I’d probably never turn it off, whereas these days I’ll only pick up a print or digital copy maybe three or four times a year, at best. An automatic digital pull list of an iPad owner’s favorite titles is bound to be a hit with fans who enjoy comics for the sake of their content, and not for what they might be worth on eBay in 20 years’ time (hint: it’s nothing).

DC is already talking about introducing value-add elements to digital comics, sort of like DVD special features, but if Netflix has taught us anything, it’s that the only special feature most media consumers care about is reasonable subscription pricing that delivers just the good stuff. An all-you-can-eat publisher-wide subscription for digital comics would be amazing, but I’d settle for per-issue monthly or yearly pricing. Let’s hope comics don’t take as long to warm up to that idea as they did to same-day digital.

Why Intel put $24M behind cloud, big data

By Derrick Harris Sep. 9, 2011, 1:00pm PT No Comments

Intel Capital announced $24 million in new investments yesterday, and cloud computing and big data companies were the big beneficiaries. These aren’t the first such investments Intel has made in the space, but they do underscore Intel’s understanding that it has to prop up software partners to keep Intel dominant as computing evolves.

Last week, Stacey Higginbotham wrote about a supercomputing-industry microcosm of the situation that Intel and other chipmakers are facing. The long and short of it, she explained, is that multicore processors are still relatively new, but always evolving, and more and more of them are being packed into supercomputers. Some experts suggest we’ll never reach the exascale plateau if software innovation doesn’t catch up with the intricacies involved in running jobs across so very many cores.

But the problems don’t stop at the world’s fastest systems. As companies try to transition their legacy data centers into cloud computing environments, or start rolling out distributed systems to run big data workloads, they run into their own problems. It’s not always easy to write applications designed to run on such systems, and maximizing the efficiency of today’s powerful processors is also difficult. Having a data center or cluster full of multicore processors isn’t too helpful if applications aren’t taking advantage of them, or if they’re too underutilized to justify the expense of buying or running them.

If the data center is the new box, as Joyent’s Jason Hoffman puts it, companies are going to have to figure out how to run whole data centers without getting bogged down at the processor level. In 2009, Intel invested $8.5 million in Joyent, which has since complemented its public cloud business with a software business selling the company’s SmartDataCenter private-cloud product.

This time around, Intel reportedly invested $5 million in private-cloud software startup DynamicOps. That company, which was spun out of investment bank Credit Suisse, sells software that helps companies automate their data centers and design cloud services. Intel also invested in predictive analytics startup Revolution Analytics, which sells a commercial version of the popular, open-source R statistical analysis software. Intel claims that “Revolution Analytics is aligned with Intel’s visual and parallel computing and open source strategies.”

For more on Revolution Analytics, check out this interview with its founder, Norman Nie.

The remaining recipients of Intel’s $24 million investment also fall under the cloud and big data umbrellas to some degree, as well as under the increasingly important mobile umbrella. Intel stands to benefit not only from helping companies build cloud infrastructure and applications, but also by helping companies deliver those apps to users’ devices. The recipients include e-commerce Platform-as-a-Service provider IP Commerce; telco analytics specialist Guavus; cloud-based gaming service Gaikai; cloud-based social-gaming personalization startup Swrve New Media; and commercial energy-management provider enLighted.

Swrve’s service is actually an interesting combination of cloud, big data and mobile. Here’s how it describes itself.

Image courtesy of Flickr user zzzack.

The elephant in the gigabit network room

By Stacey Higginbotham Sep. 9, 2011, 2:00pm PT 4 Comments

Getting to gigabit networks isn’t a cheap proposition, and once they are deployed, they generally cost more than the average person can afford. For example, a gigabit connection in Chattanooga, Tenn., one of several towns offering such a service, costs more than $300 a month. Even if one can’t get a gig, a 100 Mbps connection can cost about $120 or so. Which means that for most broadband supporters, even ardent ones such as myself, the elephant in the room is: Why spend that much, when for today’s applications, a cable modem offering 12-14 Mbps down will do just fine?

It’s a question analysts posed to Verizon, the company that deployed the nation’s largest fiber-to-the-home network, when they pressed it about take-up rates and boosting subscribers for FiOS. It’s a question Google seeks to answer with its own plans to build out a gigabit network in Kansas City, Kan., and Kansas City, Mo. And it’s also a question we need to focus on more, even as the siren song of mobile connectivity and apps tempts developers to think smaller.

“It’s ironic that the app that is having the most effect and making a big difference is Twitter, which is the most narrow band application imaginable,” says Dane Jasper, CEO of Sonic.Net. “Something similar has to occur in broadband as it gets faster and faster and it gets more ubiquitous.”

Jasper’s ISP is overlaying fiber to the home in Sebastopol, Calif., where it has already deployed an ADSL2 network. Subscribers can pay $40 a month for wireline voice and 100 Mbps FTTH broadband, or they can pay $70 for two lines and get a gigabit. Those seem more like the economics that Google is looking for when it sells its network, but until later this year, when it should announce pricing, we’re still unsure what it plans to offer.

But tests from Jasper’s initial deployment speak to some problems the industry will need to overcome if we want gigabit networks to become the norm. For starters, there’s the equipment. Computers today aren’t geared up to support gigabit connections, and current Wi-Fi networks can’t offer those speeds either. Jasper says the first trial of the gigabit network was a speed test on a generic laptop that showed 420 Mbps down; the laptop couldn’t handle a full gig.

That’s fine, because there aren’t that many applications that need those speeds. Perhaps the most compelling use case I can think of right now is subscribing to a new online backup service and uploading your images, music and movies all at once. A gigabit could help you complete the task in minutes as opposed to hours or days. But that’s a one-time kind of benefit — consumers will need everyday benefits if they are going to upgrade their broadband. Yet network operators have a hard time justifying an investment in a network that will attract few subscribers, and application developers have little incentive to develop programs for the few on gigabit networks.
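To put "minutes as opposed to hours or days" into rough numbers, here is a back-of-the-envelope calculation; the 100 GB library size is an assumption for illustration, and real-world figures would be worse once protocol overhead and slower upstream links are factored in.

```typescript
// Back-of-the-envelope transfer times for a hypothetical 100 GB media library.
// Assumes the link is fully utilised and ignores protocol overhead.
function transferHours(gigabytes: number, megabitsPerSecond: number): number {
  const megabits = gigabytes * 8 * 1000; // 1 GB ≈ 8,000 megabits (decimal units)
  return megabits / megabitsPerSecond / 3600;
}

console.log(transferHours(100, 12).toFixed(1));          // ≈ 18.5 hours at 12 Mbps
console.log((transferHours(100, 1000) * 60).toFixed(0)); // ≈ 13 minutes at 1 Gbps
```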

So we’re stuck at a point where a gigabit — or even 100 Mbps — sounds awesome, but it’s not exactly worth the prices most companies want (or need) to charge. This is why Google’s and Sonic.Net’s plans to expand moderately priced 100 Mbps and gigabit networks will be so important.

“If every consumer has 100 Mbps, we’d have some better applications,” Jasper said. “At 100 Mbps, high-def video conferencing becomes a reality and you don’t need local storage anymore. You don’t even need local computing.” He pointed me to this awesome video as an example of what might happen, ya’ know, just in case anybody wants to build those next-generation applications.

How Netflix uses WebKit and HTML5 for connected devices

By Ryan Lawler Sep. 9, 2011, 2:00pm PT 6 Comments

Netflix has spent the past few years trying to get its streaming service on as many devices as possible, including TVs, Blu-ray players, game consoles, streaming set-top boxes, mobile phones and tablets. It’s a monumental undertaking, given the large number of different operating systems and development tools needed across a number of different CE manufacturers.

In a post on the Netflix Tech Blog, device UI engineers Matt McCarthy and Kim Trott shared a presentation that gives a peek behind the curtain at how it’s able to do so. The key is standards-based development using WebKit, JavaScript, HTML5 and CSS3.

Cutting-edge features, dynamic updates and A/B testing

Standardizing on WebKit enables Netflix to deliver nearly identical code to all new Netflix-ready devices. By basing its software development kit (SDK) for consumer electronics manufacturers on WebKit, it can also use “cutting-edge features like querySelector API, CSS transforms, CSS transitions & animations, localStorage, CORS, getters and setters, etc.”
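As an illustration of the kind of standards-based code that list implies, here is a generic browser-side sketch using querySelector, CSS transitions and localStorage; the selector and storage-key names are made up and this is not Netflix's actual SDK code.

```typescript
// Generic sketch of a WebKit/HTML5 TV-UI pattern: query the DOM, animate a
// row of title art with a CSS transition, and keep lightweight UI state in
// localStorage between sessions. ".title-row" and "titleRowOffset" are
// hypothetical names for illustration only.
function readSavedOffset(): number {
  return Number(localStorage.getItem("titleRowOffset") ?? "0");
}

function saveOffset(px: number): void {
  localStorage.setItem("titleRowOffset", String(px));
}

const row = document.querySelector<HTMLElement>(".title-row");
if (row) {
  // Use a CSS transform + transition rather than a per-device native
  // animation framework, so the same code runs on any WebKit device.
  row.style.transition = "transform 300ms ease-out";
  row.style.transform = `translateX(${-readSavedOffset()}px)`;
}

// Example: remember that the user has scrolled 240px into the row.
saveOffset(240);
```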

More importantly, by relying on common web technologies, Netflix doesn’t have to develop individual applications for each device framework it comes across. It also can improve development cycles for user interfaces on connected devices. According to the presentation, Netflix typically updates its connected apps every two weeks. By using web standards, it doesn’t need to rely on CE manufacturers to push firmware updates to fix bugs or change the UI; it can do dynamic updates by itself.

Netflix also can dynamically test user interfaces, to see how users interact with them. With web standards, it can redirect different customers to different web pages, feeding them different experiences. By doing so, it can figure out which interfaces are most effective.
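One common way to implement that kind of split is deterministic bucketing on a stable customer identifier; the sketch below is a hypothetical illustration of the general idea, not the mechanism described in the Netflix presentation.

```typescript
// Deterministic A/B assignment: the same customer id always hashes to the
// same bucket, so a given user keeps seeing the same UI variant.
function bucketFor(customerId: string, buckets: number): number {
  let hash = 0;
  for (const ch of customerId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % buckets;
}

// Send 10% of customers to a hypothetical experimental layout, the rest to control.
function uiVariant(customerId: string): "control" | "new-row-layout" {
  return bucketFor(customerId, 100) < 10 ? "new-row-layout" : "control";
}

console.log(uiVariant("customer-12345"));
```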

Different classes of devices

There’s a wide range of available processing speed and memory, from CPUs that have 600 MHz to those with 3.2 GHz of processing power, and 88 MB of available memory to 512 MB. As a result, Netflix builds in flexibility to ensure it doesn’t overload lower-powered devices with more advanced features.

Netflix currently has three tiers of devices based on their configurations, with the lowest tier having zero animation and small cache sizes. On the top end, devices have animations, large cache sizes and frequent pre-fetching of data. According to the presentation, all devices start in the middle tier and are then throttled up or down based on performance.
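A minimal sketch of that tiering logic might look like the following; the tier configurations and frame-time thresholds are illustrative assumptions, not values from the presentation.

```typescript
// Devices start in the middle tier and are moved up or down based on how the
// UI actually performs on them, as the presentation describes.
type Tier = "low" | "middle" | "high";

interface TierConfig {
  animations: boolean;
  cacheEntries: number;
  prefetch: boolean;
}

// Hypothetical per-tier settings: low tier gets no animation and a small cache.
const tierConfigs: Record<Tier, TierConfig> = {
  low: { animations: false, cacheEntries: 20, prefetch: false },
  middle: { animations: true, cacheEntries: 100, prefetch: false },
  high: { animations: true, cacheEntries: 500, prefetch: true },
};

// Promote devices that render smoothly, demote ones that drop frames.
function adjustTier(current: Tier, avgFrameMs: number): Tier {
  if (avgFrameMs < 20 && current !== "high") return current === "low" ? "middle" : "high";
  if (avgFrameMs > 50 && current !== "low") return current === "high" ? "middle" : "low";
  return current;
}

let tier: Tier = "middle";   // every device begins in the middle tier
tier = adjustTier(tier, 14); // a fast device gets promoted to "high"
console.log(tier, tierConfigs[tier]);
```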

Paving the way for other app makers

Netflix’s focus on standards has allowed it to provide dynamically updated user interfaces on devices like the Sony PlayStation 3 and Sony Internet TVs, Nintendo Wii, Logitech Revue and the Boxee Box from D-Link. As more streaming video app makers try to tackle the issue of device fragmentation, Netflix’s approach could provide one way to streamline app creation and maintenance.

[Embedded presentation: Netflix WebKit-Based UI for TV Devices, shared by mattmccarthy_nflx]

AT&T to DOJ: T-Mo is full of fail, but we can make it a win!

By Stacey Higginbotham Sep. 9, 2011, 2:57pm PT 3 Comments

AT&T filed its response to the Department of Justice’s lawsuit that attempts to stop Ma Bell’s $39 billion acquisition of T-Mobile USA Friday, and the response can be summed up as: T-Mobile is a pathetic loser of a phone company that provides no benefit to society, but if we can take it over, it’s a win for everyone, especially our customers, who will get better service.

From the AT&T response:

Defendants respond that T-Mobile is losing customers and subscriber shares in a growing market, is not a unique or material competitive constraint on AT&T, and will not be one going forward in the absence of this transaction.

Without this merger, AT&T will continue to experience capacity constraints, millions of customers will be deprived of faster and higher quality service, and innovation and infrastructure will be stunted. If this transaction does not close due to Plaintiff’s lawsuit, wireless consumers will, as the FCC Chairman predicts, increasingly face higher prices and lower quality.

Yes, T-Mobile is now a makeover candidate. Of course, if this were a TLC or Bravo makeover show, Ma Bell would be this dashing, debonair phone company that provides awesome service and prepares T-Mobile to do the same, but this is real life. Unfortunately for AT&T, its service has had hiccups and its arguments to the DOJ that those hiccups will continue and consumer prices will rise unless it gets T-Mobile sound more like blackmail than a statement of unmitigated truth. After all, this is the same company that skimped on network upgrades for years and also shared that it planned to stop its LTE network at 80 percent of the U.S. because going further wasn’t economical.

AT&T is trying to portray T-Mobile as a dog that doesn’t act as true competition in the mobile market and may not stick around, but also as a vital source of infrastructure for AT&T, which is in need of the spectrum that only T-Mobile can provide. AT&T also seems to deny basic math in its filing, but I think it’s signalling that it’s both preparing for and will concede to some customer divestitures as part of the merger:

10. Pursuant to the Transaction Agreement, AT&T will acquire T-Mobile for cash and stock worth approximately $39 billion. If this transaction is consummated, AT&T and T-Mobile would become the nation’s largest wireless carrier. The merged firm would have approximately 132 million connections to mobile wireless devices in the United States, with more than $72 billion in mobile wireless telecommunications services revenues.
Response: Defendants admit the allegations in the first two sentences of this paragraph. Defendants deny the remainder of the allegations in this paragraph.

As one reads deeper, AT&T’s denials and refutations of the Justice Department’s arguments get more and more bizarre. AT&T admits that people buy telephone service locally, but denies that consumers take into account the places they work, live and travel to when considering a cell phone provider.

It also denies that it seeks to set prices and plans at a nationwide level based on the pricing plans set by Verizon, Sprint and T-Mobile. That’s despite having said as much three years ago, despite changing its pricing plans within days of Verizon offering an unlimited plan, and despite following Verizon in raising its early termination fees (side note: Sprint recently also raised its ETF to the same price that AT&T and Verizon have set).

In places where the evidence is particularly damning, such as quotes from internal AT&T documents citing T-Mobile as a competitor, AT&T shoos away those allegations as being taken from unidentified documents and out of context. The entire argument that AT&T is making boils down to an assertion that T-Mobile isn’t a real competitor to AT&T, yet smaller regional players and even new entrants could be, and that the public would love this merger because it would improve AT&T’s ability to offer wireless service.

A merger between AOL and Yahoo: You’ve got fail?

By Mathew Ingram Sep. 9, 2011, 3:00pm PT

The idea that AOL might want to merge with Yahoo — as it reportedly does, according to a pair of anonymous sources who spoke to Bloomberg News on Friday — isn’t all that surprising. There have been rumors about merger talks between AOL and its fellow former web giant off and on since 2008, with the most recent round coming at the end of last year. The only question left is which analogy you prefer when it comes to picturing this potential combination: two old dogs roped together? Two bricks tied up in the hope that they will float? Two drunks trying to prop each other up as they stagger down the street? Feel free to fill in your own suggestions below.

According to the Bloomberg report, AOL has opened up discussions with Yahoo’s financial advisors, and suggested the two companies could be combined, with AOL CEO Tim Armstrong emerging as the CEO of the new entity after the deal. Yahoo, of course, is currently in need of a chief executive, since it just finished giving its old one — Carol Bartz — the heave-ho earlier this week, something Bartz apparently took with her usual display of tact and grace (there have also been reports that co-founder Jerry Yang might be trying to mount a bid to take Yahoo private).

Is Yahoo playing hard to get?

Within minutes of the Bloomberg report hitting the web, CNBC posted to its Twitter account that it had heard from a Yahoo source that the company had no interest in merging with AOL. That doesn't necessarily mean a deal couldn't happen, however — Yahoo's response could be a negotiating tactic, or the board of directors may not want to show its hand publicly at this point. It's also possible Yahoo could wind up in play whether it wants to be or not. According to the Wall Street Journal, the last AOL bid to merge involved several large private-equity firms that wanted to finance the deal.

CNBC (@CNBC), about 12 hours ago via Seesmic: "Source Close To Yahoo Says No Interest In A Deal With AOL" @Yahoo @AOL

AOL’s interest in a deal is easy to explain: As we noted in a recent post, Armstrong is likely feeling some heat as a result of the company’s moribund financial performance, and is looking for some kind of way to generate value. Like his former counterpart at Yahoo, he has spent two years cutting costs and trying to generate new lines of business — including the purchase of The Huffington Post for $315 million, and more than $150 million spent on building out the hyperlocal Patch.com network.

But while AOL’s chief executive has continually promised that a turnaround is near, with every quarterly report that goal continues to recede into the distance. Analysts say he either has to split the company up and sell the pieces, or find some other way of generating value — or be removed from the job so someone else can. And while merging with Yahoo probably wouldn’t be a great long-term strategy, it might produce enough of a short-term benefit for the company’s investors that Armstrong could justify doing it.

The proposal that Bloomberg described in its report is similar to the one that AOL was apparently floating in the fall of last year, which would have seen Yahoo either acquire AOL outright (since it is much larger than AOL, with a market capitalization of about $18 billion, compared to AOL’s $1.6 billion), or merge with the former web portal in a complex deal involving the sale of Yahoo’s investment in Chinese web giant Alibaba — which is responsible for much of the company’s current market value.

Memories of 2008 and a rejected Microsoft bid

The idea of an AOL-Yahoo merger was also discussed in 2008, when Microsoft made a $45-billion takeover bid for Yahoo — a bid the web company no doubt wishes it had taken at this point, since its market value is less than a third of that now (something that likely contributed to the poor reviews for former CEO Bartz). Eventually, Microsoft withdrew its bid for Yahoo, and the talks about a merger with AOL never proceeded.

Are the chances of a deal any better this time around? It’s hard to see how. The biggest stumbling block to a merger is the lack of any compelling rationale for doing one, despite efforts to make that case. Both companies are suffering from the same overall problem: They have millions of visitors to their web properties, but the advertising revenue that they are able to generate from that traffic remains unimpressive, and is likely to dwindle. AOL has the added problem of watching its dial-up access business — which generates the majority of its cash flow — continue to disintegrate.

Hence the “two rocks tied together” metaphors. A merger may make Tim Armstrong look (at least briefly) like a success, and possibly catapult him into the CEO’s chair at Yahoo — but combining these two ailing giants isn’t going to magically produce a competitive web business, no matter how many financial architects get their hands on the deal.

Post and thumbnail photos courtesy of Flickr user Hans Gerwitz

LexisNexis open sources code for Hadoop alternative

By Derrick Harris Sep. 9, 2011, 3:28pm PT

HPCC Systems, the division of LexisNexis Risk Solutions dedicated to big data, has released the open source code of its data-processing-and-delivery software, which it is positioning as a better version of Hadoop. The High Performance Computing Cluster code is available on GitHub, and it marks the commencement of HPCC Systems’ quest to build a community of developers underneath Hadoop’s expansive shadow.

“We’re now really open source,” LexisNexis CTO Armando Escalante told me, responding to early criticism that the company was dragging its feet on releasing the code. He said he’s excited, but nervous because the code is now exposed to reviews and comments after years of operating privately within LexisNexis Risk Solutions.

The HPCC architecture includes the Thor Data Refinery Cluster and the Roxie Rapid Data Delivery Cluster. As I explained when covering the HPCC Systems launch in June, “Thor — so named for its hammer-like approach to solving the problem — crunches, analyzes and indexes huge amounts of data a la Hadoop. Roxie, on the other hand, is more like a traditional relational database or database warehouse that even can serve transactions to a web front end.” Both tools leverage the company’s Enterprise Control Language, which Escalante describes as easier, faster and more efficient than Hadoop MapReduce.
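
To make the Thor/Roxie split a little more concrete, here is a minimal, purely illustrative Python sketch of the same two-stage pattern: a batch "refinery" step that crunches raw records into an index, and a "delivery" step that answers low-latency queries against that index. It is a toy analogue under my own assumptions, not HPCC code; it does not use the Enterprise Control Language, and the record layout and field names are made up.

```python
# Toy analogue of a "refinery" (batch indexing) stage and a "delivery"
# (low-latency query) stage. Illustrative only -- not HPCC Systems code.
from collections import defaultdict

# Hypothetical raw records, e.g. (person_id, city) pairs pulled from many sources.
RAW_RECORDS = [
    ("p1", "Atlanta"), ("p2", "Boston"), ("p3", "Atlanta"), ("p4", "Chicago"),
]

def refine(records):
    """Batch stage: crunch and index raw data (the role Thor plays at scale)."""
    index = defaultdict(list)
    for person_id, city in records:
        index[city].append(person_id)
    return dict(index)

class DeliveryService:
    """Query stage: answer fast lookups against the prebuilt index (Roxie's role)."""
    def __init__(self, index):
        self.index = index

    def people_in(self, city):
        return self.index.get(city, [])

if __name__ == "__main__":
    service = DeliveryService(refine(RAW_RECORDS))
    print(service.people_in("Atlanta"))  # ['p1', 'p3']
```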

Aside from the open source Community version, HPCC Systems also offers a paid Enterprise version of the HPCC product. The core code is the same, Escalante explained, with the major differences being additional enterprise-grade capabilities such as management tools and support and services.

It will be a tall order to displace Hadoop — which has growing vendor, project and developer ecosystems — but Escalante is confident HPCC can do it. According to Escalante, Hadoop needs a large community because it’s a growing project, whereas HPCC is already mature because it has been serving large customers for a decade. It’s like trying to evolve a microbe into a human being instead of just starting with a human being off the bat. The challenge, he thinks, will be spreading that message to web startups already sold on and experienced with Hadoop.

However, Escalante doesn’t think most enterprises are locked into Hadoop at this point, if they’ve even used it at all. And with its track record and Enterprise Edition features, HPCC is arguably more geared toward enterprises anyhow. For companies spending big money on traditional hardware systems, Escalante says HPCC has to look even better.

“We haven’t killed Hadoop [yet] … but we have killed mainframes,” he explained. By mainframes, he means all the remnant legacy data centers, such as large, expensive storage systems, data warehouses and OLAP systems. Because of Roxie’s capabilities running on commodity hardware, Escalante said LexisNexis was able to get rid of millions of dollars worth of legacy gear. As large enterprises’ data volumes keep growing, he said, they’ll have to pay through the nose to buy traditional systems big enough to handle the load.

With Hadoop, companies must maintain separate data warehouse environments, although startups such as Hadapt and, to some degree, Platfora aim to change that.

HPCC Systems, as well as Microsoft with its Dryad project, has an outside chance to steal some of Hadoop’s thunder with developers, but as Escalante acknowledged, its best chance is probably with large customers that will be moved by its enterprise-readiness. HPCC Systems is touting Sandia National Laboratories and the Georgia Tech Research Institute as two big-data-savvy users already sold on HPCC, and Escalante promises some big-name customer wins in the next few months.

Feature image courtesy of Flickr user opensourceway.

Latest Samsung Mobile Phones With Dazzling Looks And Advanced Technology

Samsung mobile phones are developed, designed and marketed by Samsung Electronics Co., one of the world's leading consumer electronics brands. Over the years, Samsung's handsets have carved out a place of their own in the mobile telecommunications field, and the newest models live up to the brand name with a wealth of technology. Many of the latest phones come equipped with Bluetooth and 3G (third generation) connectivity, and handsets such as the Samsung D900 and Samsung D800 are among the most sought-after models in Samsung's line-up.

Let us discuss some of the latest handsets in detail.

The Samsung D800 is a slide-up handset with attractive looks and captivating functions. The textured magnesium exterior, stylish rounded shape and good-looking black finish of the phone create a distinctive impression, and the styling complements a multitude of highly innovative features. An integrated MP3 player, a 1.3 megapixel digital camera, Bluetooth compatibility, advanced multimedia features, 80 MB of storage memory and voice dialing are the key features that make the handset popular among mobile phone users.

The Samsung D900 is another fantastic Samsung mobile phone boasting outstanding capabilities. The handset is ideal for those who like capturing out-of-the-ordinary moments of their lives while on the move. It offers a 3.13 megapixel digital camera with flash, autofocus and a macro mode, enabling users to capture just the right pictures and photos, along with a 262k-colour screen and support for video streaming and recording.

Another cool Samsung handset is the Samsung E900, which has a nice-looking design and is quite stylish compared with other handsets. The visual appearance of the phone is further accentuated by an ultra-sleek, touch-sensitive keypad.

The Samsung E900 is designed with numerous advanced multimedia and business features. The 2.0 megapixel camera with built-in CMOS image sensor, 4x digital zoom and a flash helps ensure good images every time and in any condition, and captured images can easily be printed on a Bluetooth wireless printer. Another attractive option is the built-in music player, which lets users play and listen to countless tracks of their choosing.

You can easily send and receive MMS, SMS and email messages, and connect wirelessly to other devices via Bluetooth. You can also download Java mobile games that exercise intelligence and dexterity.

Mobile phone users across different age groups and geographic locales have shown interest in the Samsung E900. With its unobtrusive design and multitude of extremely helpful features, the E900 has added a new dimension to the mobile experience for many smartphone users in assorted parts of the world.

Movable Ink breathes life into e-mail

By Ryan Kim Sep. 9, 2011, 4:00pm PT

E-mail is far from dead, if my packed inbox is any indication. But not being dead isn’t the same as being alive in the way we’ve come to understand the web. While the web is interactive and dynamic, e-mail reflects very little of that evolution.

But a New York company is trying to breathe new life into e-mail marketing by making e-mails real-time and context aware. Movable Ink offers advertisers and publishers a way to push out e-mails that remain current whenever a user opens them, instead of messages that grow increasingly irrelevant and stale as time goes on.

Movable Ink does this by allowing clients to pull real-time content from their websites and easily embed that information into their e-mails. So whenever someone opens a message, it contains the most up-to-date information as well as contextual data based on a person’s location, time of day or the device they’re viewing the e-mail on. And it can reflect changes that have happened throughout the day.

“E-mail isn’t going away but it should be as real-time as the web,” said Vivek Sharma, co-founder and CEO of Movable Ink. “We’re bringing e-mail up to par with all the interesting stuff on the web.”

This can be helpful for daily deal sites, marketers, publishers or anyone with an e-mail relationship with a consumer. For example, a daily deal site can provide a countdown clock within an e-mail and direct people to alternative discounts when one deal sells out. Publishers can include the latest news in a message. Event sites or ticketing services can show a live calendar, how many seats are currently available or who’s RSVPed.

GroupMe, for instance, uses Movable Ink to show the latest conversations in a group chat when new members are joining a group. Daily Candy includes live tweets in its e-mails. Double Cross Vodka includes a map showing the closest place to buy the alcohol. Sharma said daily deal sites are seeing a 33 percent lift on click-through rates and some clients are seeing up to 120 percent increases in click throughs using Movable Ink.

Users not only get the most up-to-date information, but publishers can also engage in A/B testing to see what messages or campaigns are working and update their campaigns on the fly. Sharma said customers can implement Movable Ink within a few minutes and get access to a dashboard tool to build and control their campaigns, while more serious developers can tap Movable Ink’s API for deeper integration. Sharma said his company has done a lot of back-end work, building essentially a custom app server that can deliver real-time messages to any e-mail client.
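
The general technique that makes open-time content possible is usually an image embedded in the e-mail whose source URL points at a server that renders fresh content every time the message is opened. Below is a minimal, hypothetical Python sketch of that idea using only the standard library; it is my own illustration of the approach, not Movable Ink's actual app server or API, and the endpoint, port and deal deadline are made up.

```python
# Sketch of open-time e-mail content. The e-mail would embed something like
#   <img src="http://example.com/deal.svg">
# and this server renders a fresh image (here, a countdown) on every open.
# Illustrative only -- not Movable Ink's actual server or API.
from datetime import datetime, timedelta
from http.server import BaseHTTPRequestHandler, HTTPServer

DEAL_ENDS = datetime.now() + timedelta(hours=6)  # hypothetical deal deadline

class OpenTimeImage(BaseHTTPRequestHandler):
    def do_GET(self):
        remaining = DEAL_ENDS - datetime.now()
        hours, rem = divmod(max(int(remaining.total_seconds()), 0), 3600)
        minutes = rem // 60
        svg = (
            '<svg xmlns="http://www.w3.org/2000/svg" width="300" height="40">'
            f'<text x="10" y="25" font-size="16">Deal ends in {hours}h {minutes}m</text>'
            '</svg>'
        )
        body = svg.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "image/svg+xml")
        self.send_header("Cache-Control", "no-store")  # force a re-render on every open
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), OpenTimeImage).serve_forever()
```

In practice most e-mail clients only render raster formats, so a production system would likely return a PNG or JPEG and key the response off per-recipient parameters carried in the image URL.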

Movable Ink CEO and co-founder Vivek Sharma

Sharma and co-founder Michael Nutt came up with the idea last year after trying to start a syndicated e-commerce company. Sharma said he realized the bigger opportunity was in fixing e-mail, which is an increasingly important channel for companies to stay in touch with customers. He said while some start-ups have improved the data and delivery of e-mail, there was still an opportunity in attacking the design side.

“It was just an a-ha moment because no one’s really innovating in the e-mail space,” he said. “It turns out real-time content delivery is a real problem and we’re talking to some large e-mail providers to make this a part of their products.”

It seems like Movable Ink would be a ripe acquisition target. Or perhaps some e-mail companies might look to replicate the technology. But Sharma believes Movable Ink can become a standalone business and is more than just one feature. He’s hoping to host a gallery of real-time content widgets that clients can drop into their e-mails. Movable Ink offers a free version and a pro version that costs $49 a month. The company raised a seed round of $260,000 in March.

I think it makes sense to make e-mails more interactive. Google has done some work to make YouTube videos embeddable in e-mails, but there’s more that can be done to make e-mails much more relevant and current. This could be significant news for services like Groupon, which is trying to get people to interact directly with its website and apps but still relies a lot on e-mails to alert people about deals. Many people sign up for e-mail alerts, but fatigue can set in over time. If the e-mails are more valuable and current, though, then even if a user doesn’t open one right away, they can still be an important way for a marketer to keep the relationship with a consumer going.

It reminds me of PlayHaven, a mobile marketing provider that I wrote about last week, which makes mobile ads real-time and interactive. They’re showing that there’s still opportunity to bring the dynamism of the web to more products.

10 stories to read this weekend

By Om Malik Sep. 10, 2011, 12:00am PT

Back to work: this week has been one full of excitement and news. Amid all the buzz, it is fairly easy to miss out on some of the best (and contrarian) writing that was published over the past few days. Here are a few stories and a video for you guys to read.

Coming back to work after vacation without chaos and stress. Well, what if you worked all summer? :-)

Do you need a work detox? Heather Mills on why we are in control of our own destiny, especially how we work and live. Powerful essay by this former lawyer.

The emerging global mind: Tiffany Shlain, well known for her involvement with the Webby Awards, writes about how internet-dependence is changing us and how we interact with our world. It is a delightful essay.

Social Media, pretend friends and the lie of false intimacy. The title says it all.

Eight false promises of the Internet.

Dan Ramsden just might be my favorite writer right now. Even if his words are heavy, he has me nodding. Like in this piece: Growth is not actually change and speed is not actually innovation.

Sharing behavior is pretty much the same on most social networks–be it the new (Twitter), recent (instant messaging) or old (e-mail).

The plane truth. Why boarding by rows is just stupid.

The art of the note. In the light of the U.S. Postal Service’s well chronicled troubles, my former boss David Churbuck pens an ode to the hand-written note and why it holds a lot more meaning in these digital times.

Duncan Davidson has had enough of the Burning Man festival. Why? It has something to do with cameras and photos.

Salesforce CEO Marc Benioff in conversation with Google Chairman Eric Schmidt.

P.S. I wonder if you have any suggestions/hacks that would allow readers to essentially click a button and get all these articles automatically saved to their Evernote, Instapaper or Readability accounts. If you have thoughts, please leave them in the comments or e-mail me.

The news entrepreneur's dilemma: Which comes first, the money or the audience?

Reporters who would like to escape the annual threat of newsroom layoffs often ask me how they can get started running their own news websites.

They're not so much worried about the technical steps in starting a website business. And the journalism doesn't bother them one bit. What concerns them is the money.

"How can I raise enough money to start my own website when I don't have a website yet?"

It's the chicken and the egg question for the digital age. Which comes first - the money or the website?

Here's the answer - the audience.

Let's remember what people are paying for when they fund a publication. It's not the wonderful journalism or your personal awesomeness as a human being. They're paying for the opportunity to reach your readers. And that's true for non-profit funders as well as for-profit advertisers. A non-profit has no interest in funding great journalism that no one reads and that never makes a difference in any community.

So, as always, it all remains about the audience. Show people with money that you have an audience they wish to reach - and they'll show you their money. Don't, and they won't.

Too often, I've worked with journalists who are looking for funding so that they can afford to move to the city where they want to start and run their website. Their career either took them away from their hometown or led them to a city where they just didn't feel the connection that they had elsewhere. It's tough to see the reaction when I tell people this, but no one's going to pay for you to relocate and build an audience from scratch. You need to be part of the community you want to cover in order to build an audience from it.

That applies to niche topic websites as well as geographically focused ones. If you're not part of the community, you're not going to earn the credibility you need among potential readers, as well as advertisers and other funders. One of the bigger mistakes the newspaper industry made was consistently recruiting and hiring reporters from outside their local areas, in an effort to broaden the labor pool, lower labor costs and boost profits. That disconnected newsrooms from the communities they covered, and now that they have options, communities won't stand for it any longer.

So start by being part of the community you wish to cover. Then take the next step by building an audience for yourself (not for your current employer, but for yourself). In almost every case, that means you'll need to start your publication before you leave your employer. If you're independently wealthy (or married to someone with an income large enough to support you both), you could run a website indefinitely with no income. But I would never recommend living on your savings while you start a site. Don't jeopardize your retirement to start a news business if you can get started while you're earning an income somewhere else.

Some employers are now barring their employees from working on personal websites. How convenient for them. Under the guise of "ethics," these employers are inhibiting the development of potential competition within the market. If that's the case you're in, then publish under a spouse's name or a pseudonym while you see if the publication is viable and can begin attracting readers. Or find a lawyer who can look for a loophole through which you can publish. But don't allow your employer to intimidate you into not pursuing your career options at the same time it is intimidating you with the ever-present threat of a layoff.

Start your news website business, then, by building an audience. Keep your expenses as low as possible, then use your voice to engage your readers, building that audience into an involved community. Once you have that in hand, trust me, the money will come to you.

Not dead yet, Symbian getting Microsoft Office apps

By Kevin C. Tofel Sep. 9, 2011, 7:53am PT

Nokia phones running the new Symbian Belle software will gain more productivity applications in the form of Microsoft Office products. The first suite of tools, called Microsoft Apps, is expected by the end of the year, with a widespread release at the beginning of 2012. A planned future update will bring Word, PowerPoint and Excel to Nokia’s recent smartphones by mid-2012.

Software in the first release will include Microsoft’s Lync 2010, OneNote, PowerPoint Broadcast and Document Connection, which includes SharePoint access. In addition to these and the release of the traditional Office productivity apps, Nokia says it’s also working with Microsoft to integrate mobile device management tools.

For a platform that’s essentially on a long-term death watch, the most progress I’ve seen in Symbian has been after Nokia decided to use Windows Phone for its future smartphones. It’s a bit of a head-scratcher, given that Nokia has publicly committed to support Symbian only through 2016.

Plans can change, however, and I wonder if Symbian isn’t quite dead yet. Yes, as of now, Nokia is planning to shore up profits and market share with new Microsoft Windows Phone 7 hardware. But that hasn’t stopped the company from releasing new Symbian handsets, refreshing its old user interface with positive changes and now adding productivity tools.

Granted, the recently launched handsets were likely already in the production pipeline. But if Nokia continues to invest in new Symbian hardware and major software updates — and sales begin to rise — Symbian could prove to finally be the platform it always had the potential to be for Nokia, a solid supplement to the company’s Windows Phone 7 devices and a backup plan for platform independence.

A Closer Look At The Blackberry Bold 9900 And The HTC Chacha

Mobile phones that incorporate a full Qwerty keypad were once the handset of choice for the typical business person wanting to compose and edit e-mails whilst on the move. Recently these phones have been adopted by a younger generation who use instant messaging services such as MSN and Blackberry Messenger. We take a look at the new Blackberry 9900 Bold Touch, the newest release from the brand that made the full keypad famous, and compare it to the HTC Chacha, one of the newest phones sporting the Qwerty design.

Taking a look at both handsets for the first time, it is the Blackberry device that really grabs your attention. In terms of design the 9900 Bold Touch is the better looking phone. It looks very much like previous Blackberrys, with a small display occupying the top half of the chassis and the full keypad dominating the bottom section. The individual keys are a squared shape, which keeps the keypad looking very neat and compact. This layout is in contrast to the more rounded key design on the Chacha, which looks less premium and certainly occupies more space. The Blackberry is also a very flat design, whereas the HTC positions the screen at a slight angle to the keypad. We prefer the shape of the Blackberry not only for its looks but also from a practical point of view, as the phone is likely to fit more comfortably into a small pocket or bag.

The 9900 Bold Touch continues to outshine the Chacha when we move away from looks and examine the technical specification of both models. Perhaps the most important area of any phone is its display, and both models in this comparison offer a high standard of screen. The Blackberry comes out on top, though, thanks to a high resolution of 640 x 480 being packed into the model's slightly larger 2.8 inch screen. The HTC offers a respectable 480 x 320 resolution on its 2.6 inch display. When we break these two figures down into a pixels per inch rating, the Blackberry offers 285 as compared to the 221 available on the HTC.

A vital aspect of the modern smartphone is the processor power that is on offer. With many phones performing multiple tasks and functions, a good processor chip is necessary to enable the phone to perform these both quickly and efficiently. The Blackberry Bold 9900 really does impress in this area thanks to its Qualcomm 8655 processor. At 1.2 GHz it certainly offers plenty of power and is a full 400 MHz faster than the chip offered on the HTC Chacha. The speed of the Blackberry is also aided by the Adreno 205 graphics processing unit, which helps to ensure that complex pages and images do not slow the phone down. The processing power of the Bold Touch is matched by the phone's impressive storage capacity. There is plenty of room for all manner of multimedia files thanks to an internal capacity of 8GB. This should prove more than adequate for the average user, but should you be running a little low on space you always have the option of adding a microSD card. This phone supports card sizes up to 32GB, giving a total potential storage capacity of 40GB. The Chacha does not fare so well in this area as it only offers 512MB of built-in storage, but the presence of a microSD slot is a saving grace.

The Blackberry 9900 Bold Touch really is the Qwerty handset for a new generation. A great blend of power, style and performance makes this one of the most impressive handsets released so far in 2011.

The Amazing Sony Ericsson Play Is An Ideal Portable Gaming Handset

The Sony Ericsson Play has recently hit the shelves, and provides a great solution for those who are into their gaming. With console like graphics and controls, it leads the field in this area, but the handset also has the specifications to match and indeed outdo many competing smartphones in other areas.

In this article I will take a closer look at the key features of the Sony Ericsson Play, and try to find out if this is a gimmick, or the real deal.

Gaming Features

Obviously one of the major draws of this handset is its credentials as a gaming device. On the surface, the device looks like any other handset in the Sony Ericsson Xperia range, but once you slide out the PlayStation certified control pad from beneath the screen, you begin to get a picture of what this phone is made for. The controls are laid out in a similar fashion to those of the original console controller, with the classic buttons and direction keys, and even touch sensitive analogue controls replacing the L3 and R3 keys which millions of PlayStation users will be familiar with. The 4 inch LED-backlit LCD capacitive touchscreen is certainly up to the job, with superb graphics provided by the 1 GHz processor, and the ability to display up to 16 million colours within its high resolution. There are several classic game titles included as standard, and there are of course many more available for download from the Android Market, many of which are optimised for this particular device.

Other Specifications

The gaming features of this phone are certainly impressive, and you could be forgiven for thinking that the handset would lack in other areas. This is far from the truth though, as the spec list reads largely the same as its Xperia stablemates, the Arc and the Neo. Android OS (v2.3) is the operating system of choice, and there is a 5 megapixel camera with flash and HD video capture, as well as several options for connecting to the internet based upon location and available network connectivity. There is plenty of onboard storage thanks to a pre-installed 8 GB memory card, and this can be replaced with a card of up to 32 GB, providing ample space for games, downloaded apps, photos, video and music tracks.

As you can see, the Sony Ericsson Play provides everything you would expect from a modern smartphone, but with the added bonus of some great gaming features. Therefore, this handset would prove ideal for those who are into their gaming but don't want to carry a separate device such as a PSP. You may not be an avid gamer, but still appreciate the features of this phone. Perhaps you have a long, boring commute, or need something to liven up your lunch break at work, and need a device to keep you entertained. The beauty of this device is that once you are done playing games, you can simply slide the control pad back into place, and you have a fully functioning Android smartphone. Therefore, the appeal of a device like this may be more widespread than first meets the eye.

10 Ways to Save Your iPhone 4 Battery Life

Recently we shared tips to fix the iPhone 4 battery drain issue after the iOS 4.3 update. Well, let's assume that the iOS 4.3 update is not the cause. You have had your iPhone 4 for a while now and you notice that the battery doesn't last quite as long as it used to. So what can you do to improve iPhone 4 battery life? Here we are going to share some tips that will help you reduce battery drain.

The iPhone 4 battery is supposedly 20% better than the 3GS and I still can't go one full day without having to plug my iPhone in. I realized that I'm not the only one having this battery issue so I figured I'd post a few tips to maximize the iPhone battery life.

Here are 10 easy ways to save your iPhone 4 battery life. The battery is a very important part of any electronic device, and saving battery life means extending the life of your device.

1. Make sure you kill those apps that are running in the background for nothing.

2. Turn off Wi-Fi if you're not using it (Settings > Wi-Fi and set Wi-Fi to Off).

3. Turn off 3G if you don't need it (Settings > General > Network and set Enable 3G to Off), and turn off Push Notifications (Settings > Notifications and set Notifications to Off).

4. Turn off Bluetooth (Settings > General > Bluetooth and set Bluetooth to Off).

5. Turn off Push Email (Settings > Mail, Contacts, Calendars > Fetch New Data and set Push to Off).

6. Fetch new data manually (Settings > Mail, Contacts, Calendars > Fetch New Data and tap Manually).

7. Go through a full charge cycle each month (charge the battery to 100% and completely run it down).

8. Turn off vibrate.

9. Auto-adjust brightness (Settings > Brightness and set Auto-Brightness to On).

10. Optional: turn your iPhone 4 off (just kidding).

I think these ways to save your iPhone 4 battery life will be helpful. I don't know the exact settings for other iPhone versions, but I expect they are not much different from the iPhone 4's. I have also put together a blog covering iPhone-related news; if you have time and want to know more before purchasing a new iPhone, feel free to visit iloveiphoneandipad.blogspot.com.

Do you have any other tips to add to this above list? Feel free to share your tips by adding a comment here.

A Closer Look At The Stunning Blackberry Torch 9860

Following on from the success of the original 9800, the new Blackberry Torch 9860 looks set for success thanks to several performance enhancements, a stylish new look and a large touchscreen.

In this article I will provide an overview of the key features of this stylish new handset.

The handset itself measures up at 120 x 62 x 11.5mm, and weighs in at 135 grams, so it is reasonably slim and lightweight, although it is a world away from the likes of handsets such as the Samsung Galaxy S2. That said, it remains pocket friendly and a comfortable phone to hold and use.

One of the most noticeable differences between this and the 9800 Torch is the lack of a slide-out Qwerty keyboard. RIM has instead opted for a 3.7 inch TFT capacitive touchscreen, which dominates the front of the handset. With a resolution of 480 x 800, it offers display quality which can compete with the majority of its competition. As an additional means of navigating the new and improved Blackberry OS 7.0, an optical trackpad familiar to swathes of Blackberry users is also included as standard.

A generous 4GB of internal storage is provided as standard, so whether you are a business user, a student or a multimedia enthusiast, there is ample room to keep numerous file types with you at all times. A microSD slot is also included, which can accommodate a memory card of up to 32 GB, expanding the available storage considerably.

As with any smartphone, and Blackberry handset in particular, users can expect a great internet browsing experience from the Blackberry Torch 9860. This is thanks to an HSDPA connection which kicks in when in areas with 3G coverage, providing speeds of up to 14.4 Mbps, which is on par with many popular smartphones such as the HTC Desire HD. Also, when users have the appropriate access to wireless networks in the home, office or public location, Wi-Fi is also on hand to provide a fast and secure browsing experience with the added bonus of being free of charge.

As a camera phone, the Blackberry Torch 9860 is well equipped to take top quality still images as well as video footage. With a resolution of 2592 x 1944 (5 megapixels), users can expect high quality images with great levels of clarity and superb colour rendering. Continuous autofocus, image stabilisation and face detection all play their part in ensuring the resulting images come out as expected, and ensure that taking the perfect picture is as fuss free as possible. Video capture is in 720p quality (high definition), so users can expect great results from their handiwork.

The new and improved Blackberry OS 7.0 works alongside a powerful 1.2 GHz processor, meaning that the interface, as well as applications and certain aspects of the hardware open and run quickly and smoothly.

The Blackberry Torch 9860 is one of several new smartphones which seem to have ushered in a new era for RIM. The specifications of handsets such as the Blackberry Torch 9860 offer genuine competition to established smartphone brands such as Apple and HTC, as well as offering fresh appeal to the many existing Blackberry fans.

What is journalism worth?

What is journalism worth? That's the question journalism managers and entrepreneurs have been trying to figure out ever since it became clear, years ago, that the Internet was disrupting local publishing monopolies.

And so we've endured years of conference panels, email exchanges, and blog posts about paywalls and paid content strategies, as publishers try to figure out exactly how much people are now willing to pay for news content.

Lost in this is the realization that people have been telling us - for generations - how much they're willing to pay for news.

Start with newspapers. For most of my life, newspapers cost 25 or 50 cents per daily copy. Think how many stories appeared in each of those papers - perhaps a dozen or so staff-written stories at smaller papers, up to several dozen or more at a major metro. Add in the wire stories and syndicated features, too, and we're talking about hundreds of items of content in each daily paper.

Now consider that the cost of the daily paper included home delivery of a physical copy and usually included a fair number of coupons, too. Subtract the value of the delivery, the copy on paper and the coupons. How much of that 25 or 50 cents is left? Not much. Divide that paltry remainder by the number of items of content in that paper. It ought to be clear that the marginal value to a consumer of each newspaper story is pretty much zero.

Let's think about magazine stories. Magazines cost more, from a couple of bucks to several dollars a copy. And they include fewer, though often longer and more in-depth, stories. Again, you've got the benefits of home delivery and a copy on paper (but typically not so many coupons as a newspaper, if any). Once you subtract the value of that delivery and the paper copy, you're left with much more than you had after you subtracted the same from the cost of a single issue of the newspaper. Divide that remainder by the number of stories in the magazine and you will probably find that each magazine article has some value - though it is small, ranging from a few cents to closer to a dollar for exceptional examples.

Still, it's better than the newspaper articles' value.

Now let's consider books. A typical non-fiction book retails between $10 and $30 and usually has just one item of content within - the narrative of the book itself. (Anthologies are different, of course.) Again, you have the value of the printed copy, but there's no home delivery (if there were, you paid extra for it) and almost always no coupons. Therefore, the marginal value of the content in a book is substantial - several dollars per work.

That's the way it's been in the past, and if you're willing to face facts, that's the way it remains today.

Consumers have told us what they believe the value of journalism to be. And in a market economy, consumers' word is law. Incremental, commodity daily news reports have close to no cash value to the consumer. Longer, more in-depth magazine-style pieces have small but significant value, but almost always under a dollar and usually just a few cents.

Only book-length journalism has substantial per-unit value, in excess of $1 and often much more.

That is why I've been writing so much about eBooks lately. Incremental daily journalism traditionally has had no financial value to a publisher beyond its value as a vehicle for advertising. Magazine-length journalism has had some income value, but typically has relied upon a healthy amount of advertising income as well. Only book-length journalism has been able to rely consistently upon the income from its consumer value.

As Internet competition has cut the price of advertising, it has cut the income of publications - such as newspapers and many magazines - that are dependent upon the value of that advertising. But what the Internet took away from journalism in newspapers and magazines, it is giving back in books. Journalists who can produce book-length-and-quality work now have unprecedented ability to publish directly to a global marketplace. And the collapse in advertising revenue is not affecting them one bit.

Yes, there's more competition in the book publishing space, too. But 1,000 eBook readers deliver a heck of a lot more income to a writer than 1,000 blog or newspaper website readers. If the journalism industry is going to keep professional reporters employed, books and eBooks are going to have to play a much larger role in this industry than they have in the past.