Saturday, December 11, 2010

It's been a while...

Many things have happened recently... some good, some bad.

Recently I also got to love this phrase: "路遥知马力,日久见人心" (a long road tests a horse's strength; time reveals a person's heart).

Wednesday, December 02, 2009

SharePoint Tutorial

http://dotnetguts.blogspot.com/2009/03/sharepoint-videos-for-beginner-step-by.html


http://www.microsoft.com/click/SharePointDeveloper/


http://social.technet.microsoft.com/Forums/en-US/sharepointgeneral/thread/39bef6bf-c247-478d-8309-f66050960fc8


http://blog.sharepointhosting.com/Downloads/SharePoint-Tutorials.aspx

Tuesday, December 01, 2009

How to change the default gateway on Windows from the command line

Right, you can change your default gateway on Windows with the route command; it's very useful. Beyond that, you can also add routes to other networks with the same command. I have examples for you too.

Examples:

To display the entire contents of the IP routing table, type:

route print

To display the routes in the IP routing table that begin with 10., type:

route print 10.*

To add a default route with the default gateway address of 192.168.12.1, type:

route add 0.0.0.0 mask 0.0.0.0 192.168.12.1

To add a route to the destination 10.41.0.0 with the subnet mask of 255.255.0.0 and the next hop address of 10.27.0.1, type:

route add 10.41.0.0 mask 255.255.0.0 10.27.0.1

To add a persistent route to the destination 10.41.0.0 with the subnet mask of 255.255.0.0 and the next hop address of 10.27.0.1, type:

route -p add 10.41.0.0 mask 255.255.0.0 10.27.0.1

To add a route to the destination 10.41.0.0 with the subnet mask of 255.255.0.0, the next hop address of 10.27.0.1, and the cost metric of 7, type:

route add 10.41.0.0 mask 255.255.0.0 10.27.0.1 metric 7

To add a route to the destination 10.41.0.0 with the subnet mask of 255.255.0.0, the next hop address of 10.27.0.1, and the interface index 0x3, type:

route add 10.41.0.0 mask 255.255.0.0 10.27.0.1 if 0x3

To delete the route to the destination 10.41.0.0 with the subnet mask of 255.255.0.0, type:

route delete 10.41.0.0 mask 255.255.0.0

To delete all routes in the IP routing table that begin with 10., type:

route delete 10.*

To change the next hop address of the route with the destination of 10.41.0.0 and the subnet mask of 255.255.0.0 from 10.27.0.1 to 10.27.0.25, type:

route change 10.41.0.0 mask 255.255.0.0 10.27.0.25


Monday, November 23, 2009

InfoWorld's top 10 emerging enterprise technologies

By InfoWorld staff | InfoWorld

We all know what buzz is: It's noise. And here at InfoWorld, one of our self-appointed tasks is to extract the signal from that noise, to separate the stuff valuable to IT professionals from that which is popularly considered a big deal.

This mandate is the inspiration behind InfoWorld's list of top 10 emerging enterprise technologies of 2009. We believe this is an amazing time in IT, with a swarm of new technologies that have the potential to reduce costs, change the way we work, and open up new frontiers. So we decided to brush aside the high-level trends trumpeted by analysts and ask ourselves: Which enterprise technologies shipping now, but not yet widely adopted, will have the greatest impact?

[ See InfoWorld's top 10 emerging technologies of 2009 in our slideshow. ]

The result is the collection of actual, vapor-free technologies you find here. In case you're wondering, we used no scientific method in our selection process (other than drawing on the endeavors of the InfoWorld Test Center for inspiration). Our list is based entirely on the collective judgment of InfoWorld editors and contributors.

We have purposely avoided specific product mentions or recommendations, because we have set our sights on long-term potential rather than current implementation (for the year's best products, check our Technology of the Year Awards; the 2010 edition is only six weeks away). If it's your job to concoct your organization's technology strategy and decide where to place your bets, then our top 10 emerging enterprise technologies is for you.

[ For recommendations on the best enterprise products, check out InfoWorld's Technology of the Year Awards. ]

Narrowing down the candidates to the final 10 wasn't easy, especially after we received so many excellent responses to the call for nominations we made in September. We don't expect you to approve of all of our selections. In fact, if history is any guide, a certain number of you will disagree violently with our conclusions.

Well, that's what article comments are for. We welcome your input and invite you to join the discussion. But if you're looking for broad pronouncements on "the cloud" or other fashion statements, allow us to refer you to the latest edition of Gartner's Hype Cycle. Everyone else: Read on.

10. Whitelisting
Keeping up with malware signatures is becoming unsustainable. In 2008, for example, Symantec put out more antivirus signatures than it did in the company's previous 17 years of existence. Not only are there more viruses, worms, and Trojans, but an increasing number have the ability to morph into variations that avoid signature detection or cloak themselves using encryption.

Ultimately, the only answer to the increasing proliferation and sophistication of malware may be whitelisting, where the only executables that can run on a system are known, good executables.

[ In the InfoWorld Test Center review "Whitelisting security offers salvation," Roger Grimes tries out five whitelisting products and turns up a clear winner. ]

Whitelisting starts with a clean, malware-free image of a desktop or server. Then whitelisting software is run to uniquely identify files using one or more cryptographic hashes. Thereafter, monitoring agents on managed systems flag the presence of any executables not on the hash list or prevent them from running. Most companies distribute standard system images across the enterprise, so whitelisting can be an extremely efficient way to lock down security.

Some whitelisting software can fingerprint and block a wider range of files than executables, including scripts and macro modules, and even write-protect any text or configuration file. The latter is useful for noting unauthorized modifications, such as the changes that many malware programs make to the DNS Hosts file.
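To make the fingerprinting idea concrete, here is a rough, minimal sketch in Python of the hash-and-check step: it hashes the executables in a known-clean image into a whitelist, then flags any executable whose hash isn't on the list. The directory paths, the .exe filter, and the choice of SHA-256 are illustrative assumptions, not a description of any particular product.

import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in 1MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_whitelist(golden_image_dir):
    """Fingerprint every executable in a known-clean image directory."""
    return {sha256_of(p) for p in Path(golden_image_dir).rglob("*.exe")}

def audit(target_dir, whitelist):
    """List executables on a target system whose hashes are not on the whitelist."""
    return [p for p in Path(target_dir).rglob("*.exe")
            if sha256_of(p) not in whitelist]

# Hypothetical usage; both paths are placeholders:
# whitelist = build_whitelist(r"C:\golden_image")
# for suspect in audit(r"C:\Program Files", whitelist):
#     print("Not on whitelist:", suspect)

A real agent would of course enforce the list rather than merely report on it, but the core mechanism is exactly this lookup against a set of known-good hashes.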

Obviously, whitelisting requires a cultural shift. In many enterprises today, users still have some measure of control over what they run on their own desktop or laptop computers. But due to the relentless ramp-up in new and smarter malware -- and the increased involvement of organized crime in malware-based attacks -- whitelisting may be our only hope in the losing battle over enterprise security.

-- Eric Knorr

9. Cross-platform mobile app dev
The iPhone boom has brought many things to programmers beyond the urge to simulate bodily functions with apps like iFart. The most enduring legacy is familiarity with Objective C, a language that rose to prominence with Steve Jobs' NeXT Computer in 1988.

If you're a Java programmer, learning Objective C means figuring out how to handle memory allocations for yourself. If you're a JavaScript jock, you must grasp the concept of a compiler. There is no other choice if you want to write code that can be downloaded by millions of iPhone owners.

[ For a deeper look at mobile app dev, check out Peter Wayner's Test Center articles, "The cross-platform option: Web apps for smartphones" and "iPhone development tools that work the way you do."]

The days of the iPhone being the only attractive game in town are fading fast, though, as developers check out the competition. Android and BlackBerry phones speak Java. Nokia's Symbian OS runs many languages, including several versions of C and C++. Numerous developers look at this stew and see a common theme: All of the phones come with Web browsers, which, in turn, speak HTML, JavaScript, and CSS. Palm's new Pre wants its apps written in these Web languages from the start.

So why not start with something simple written in the languages spoken by every Web developer? When I built Web versions of my book, "Free for All," I added a special markup that let the iPhone install the Web page as if it were a regular app. All of this code will work on other WebKit-enabled browsers, like the one in Android, and it's not hard to make it work on the BlackBerry.

Some dev kits are moving beyond the browser to provide better access to deeper corners of the API. Appcelerator's Titanium Architecture, Nitobi's PhoneGap, and the LiquidGear fork of PhoneGap build apps for the major platforms that are ostensibly native but rely upon creating an embedded version of the browser. Most of the important logic is crafted in JavaScript, which runs inside the embedded browser. The code has access to the accelerometer and the GPS even though it's just JavaScript.

Others are porting popular languages like Ruby. The Rhomobile tool, for instance, embeds a complete Ruby interpreter and Web server inside your app so that you can write everything in Ruby. The folks at Apple forced them to remove the eval function because it hurt their ability to completely test each app, but aside from that, it's like building a Web site in Ruby. The code runs on the major platforms.

All of these approaches are surprisingly good -- if you're not looking for superfast performance or perfection. Game developers can use the accelerometer with these apps, but only to build simpler, two-dimensional games that don't need access to the deepest levels of the video hardware. Fonts and layouts are sometimes just a bit different from platform to platform, and this can be annoying. But if your requirements are simple and you already know Web development languages, these approaches are much easier than learning Objective C.

For enterprises, cross-platform app dev eliminates a key barrier to developing and deploying mobile applications developed in-house. It's difficult to mandate that all employees use the same smartphone, and even if you could, coding your apps for a specific platform locks you in. With cross-platform app dev, you can write it once -- without having to learn the quirks of a specific platform -- and run it across many devices. At last, widespread deployment of mobile enterprise applications may become a reality.

--Peter Wayner

8. Hardware power conservation
We all know the "two kinds of green" cliché: Save the planet and save money by reducing power consumption. The technologies to accomplish that dual purpose have already found their way into servers, desktops, and other hardware, but in some cases, the benefits will accrue only as better software support emerges.

More efficient power supplies, along with hard drives that reduce speed or shut themselves off when they aren't needed, are delivering the goods right now. But in order to "park" inactive cores, or to put motherboard components and add-ons to sleep, the hardware generally needs to be told to do so at the OS or application level.

[ Policy and practices play an even bigger role in power conservation than hardware. See InfoWorld's "10 power-saving myths debunked." ]

Power supplies are the simplest way to save energy. They need no software support and produce a double savings; they waste less electricity in the AC-to-DC conversion process and produce less heat -- reducing the power required for cooling. The 80 Plus certification program, funded by a consortium of electric utilities, provides incentives for manufacturers to produce power supplies that are at least 80 percent efficient, a jump from old units that went as low as 50 percent -- that is, only 50 percent of the power reaches the motherboard. The other 50 percent generally dissipates as heat.

Several storage vendors produce hard drives that can spin down or power off when not in use. Most of the systems shipping now limit the functionality to slowing down drives, since the time required to spin up or shut down a drive is longer than most applications support. There are generally three levels of power savings, each conserving more power and requiring more time to return to full functionality; think of them as slow, slower, and off. The first state can be recovered from in 1 to 2 seconds and the second in less than 30 seconds, while recovery from the powered-off state can take as long as two minutes. The latter causes problems with most applications, so most vendors don't use it.

The latest CPUs support core parking, powering down cores that aren't needed when loads are light. The feature is supported in Windows 7 and Windows Server 2008 R2. It's most useful in servers that are intermittently loaded or lightly used outside of business hours. A two-, four-, six-, or eight-core processor can shut down all but one core and still respond to requests, and return to full functionality if the load on the single core increases beyond a set limit.

Motherboards and add-ons such as network interface cards are introducing the capability to power down components when not in use. For example, some motherboards, particularly laptop systems, support two video systems: one built into the motherboard and one discrete. The built-in adapter uses less power, while the discrete one offers higher performance. The motherboard can switch between the two as necessary to offer either power savings or high performance.

Network interface cards can shut down when the network is not in use, and other components are adding similar capabilities. But until these features are supported by the operating system -- and, in some cases, individual applications -- they are of little use. It's great to have a NIC that powers itself down, but you need an operating system that can power the thing up again.

-- Logan Harbaugh

7. Many-core chips
The major processor vendors have hit the wall with clock speed. Each new increment in clock ticks draws so much additional power that any true performance boost incurs the wattage and heat output of, say, an electric iron.

So the manufacturers have switched from a single-lane autobahn to multilane highways -- that is, from a single, fast core to multiple slower cores that execute code in parallel. Breakneck speed is no longer the holy grail of computing. Instead, it's total throughput.

Chips with multiple cores consume less power, generate less heat, and complete work very efficiently. On servers, they are exactly what IT likes. Today, for example, an Intel Nehalem processor has four cores, each of which can run two threads simultaneously, so on a quad-processor system -- an inexpensive box -- 32 threads can run simultaneously. Five years ago, only mainframes and very high-end servers could deliver that kind of scalability. Today, it's run-of-the-mill.

[ The benefits of multicore hardware depend on multithreaded software. See Andrew Binstock's Test Center article, "Windows 7 on multicore: How much faster?" ]

Multicore chips have had less impact on desktop computing, due to the lack of applications that can make good use of the parallel resources -- not to mention the lack of programmers skilled in writing multithreaded desktop software. That's changing, however, especially in workstation applications and in graphics apps aimed at power users.
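For what it's worth, spreading independent work across every available core is becoming more approachable from the software side. Here is a minimal Python sketch using only the standard library; the crunch function is a made-up placeholder for any CPU-bound task, not a benchmark of any real workload.

from multiprocessing import Pool, cpu_count

def crunch(n):
    """Stand-in for a CPU-bound task: a naive sum of squares."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 32                      # 32 independent chunks of work
    with Pool(processes=cpu_count()) as pool:    # one worker process per core
        results = pool.map(crunch, jobs)         # distributed across the cores
    print(len(results), "chunks done on", cpu_count(), "cores")

The pattern is the same whether there are 2 cores or 80: divide the work into independent pieces and let a pool of workers keep every core busy.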

The next decade will see an explosion of cores in new chips. This era, dubbed "many core" -- a term that refers to more than eight cores -- is set to break out shortly. Intel, for example, has already shown working demos of a chip from its Tera-scale project that contains 80 cores and is capable of 1 teraflop using only 62 watts of power. (To put that in perspective, note that a system capable of 18 teraflops would qualify for the current list of the top 500 supercomputers.)


Non-x86 processor vendors are also deeply involved in this fray. For example, Tilera currently sells a 16-core chip and expects to ship a 100-core monster in 2010. What will IT do with so many cores? In the case of Tilera, the chips go into videoconferencing equipment enabling multiple simultaneous video streams at HD quality. In the case of Intel, the many cores enable the company to explore new forms of computing on a single processor, such as doing graphics from within the CPU. On servers, the many-core era will enable huge scalability and provide platforms that can easily run hundreds of virtual machines at full speed.

It's clear the many-core era -- which will surely evolve into the kilo- and megacore epoch -- will enable us to perform large-scale operations with ease and at low cost, while enabling true supercomputing on inexpensive PCs.

-- Andrew Binstock

6. Solid-state drives
SSDs (solid-state drives) have been around since the last century, but recently, we've seen an explosion of new products and a dramatic drop in SSD prices. In the past, SSDs have been used primarily for applications that demand the highest possible performance. Today we're seeing wider adoption, with SSDs being used as external caches to improve performance in a range of applications. Gigabyte for gigabyte, SSDs are still a lot more expensive than disk, but they are cheaper than piling on internal server memory.

Compared to hard drives, SSDs are not only faster for both reads and writes, they also support higher transfer rates and consume less power. On the downside, SSDs have limited life spans, because each cell in an SSD supports a limited number of writes.

[ Wondering where SSDs fit into your datacenter architecture? See "Four considerations for SSD deployment." ]

There are two types of SSDs: single-level cell (SLC) and multilevel cell (MLC). SLCs are faster than MLCs and last as much as 10 times longer (and, as you might imagine, cost a lot more). Write endurance has been a big barrier to SSDs, but increasing write specs and the smarter use of built-in DRAM caches are making the value proposition more attractive. Some manufacturers increase the longevity of drives by adding more actual capacity than the stated capacity, and they use wear-leveling algorithms to spread data over the extra cells.

But the most dramatic story is pricing. A 32GB SSD has gone from over $1,000 to under $100 in the last five years, though this is still about 46 times as expensive as a SATA drive in dollars per gigabyte. As new solutions to the wear problem emerge from the lab, we expect SSD adoption to accelerate even more, as the hunger for high performance in cloud computing and other widely shared applications increases.

-- Logan Harbaugh

5. NoSQL databases
Data is flowing everywhere like never before. And the days when "SQL" and "database" were interchangeable are fading fast, in part because old-fashioned relational databases can't handle the flood of data from Web 2.0 apps.

The hottest Web sites are spewing out terabytes of data that bear little resemblance to the rows and columns of numbers from the accounting department. Instead, the details of traffic are stored in flat files and analyzed by cron jobs running late at night. Diving into and browsing this data require a way to search for and collate information, which a relational database might be able to handle if it weren't so overloaded with mechanisms to keep the data consistent in even the worst possible cases.

[ In InfoWorld's "Slacker databases break all the old rules," Peter Wayner reviews four NoSQL databases: Amazon SimpleDB, CouchDB, Google App Engine, and Persevere. ]

Sure, you can make anything fit into a relational database with enough work, but that means you're paying for all of the sophisticated locking and rollback mechanisms developed for the accounting department to keep track of money. Unless the problem requires all of the sophistication and assurance of a top-of-the-line database, there's no need to invest in that overhead, or suffer its performance consequences.

The solution? Relax the strictures and come up with a new approach: NoSQL. Basic NoSQL databases are simple key/value pairs that bind together a key with a pile of attributes. There's no table filled with blank columns and no problem adding new ad hoc tags or values to each item. Transactions are optional.
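At its core, the key/value model is no more exotic than a persistent dictionary. A toy sketch in Python (an in-memory stand-in, not the API of any particular NoSQL product; the keys and attributes are invented) shows how each item can carry whatever ad hoc attributes it likes:

# A toy key/value "store": each key maps to a free-form bag of attributes.
store = {}

def put(key, **attributes):
    """Create or update an item; new attributes can be added at any time."""
    store.setdefault(key, {}).update(attributes)

def get(key):
    return store.get(key)

put("user:42", name="Alice", city="Singapore")
put("user:42", last_login="2009-11-23")   # ad hoc attribute, no schema change
put("post:7", title="NoSQL databases", tags=["nosql", "web"])

print(get("user:42"))
# {'name': 'Alice', 'city': 'Singapore', 'last_login': '2009-11-23'}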

Today's NoSQL solutions include Project Voldemort, Cassandra, Dynamite, HBase, Hypertable, CouchDB, and MongoDB, and it seems like more are appearing every day. Each offers slightly different ways to access the data. CouchDB, for instance, wants you to write your query as a JavaScript function. MongoDB has included sharding -- where a large database is broken into pieces and distributed across multiple servers -- from the beginning.

Simple key/value pairs are just the start. Neo4J, for instance, offers a graph database that uses queries that are really routines for wandering around a network. If you want the names of the dogs of all of the friends of a friend, the query takes only a few lines to code.
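The friends-of-a-friend example really is only a few lines once the data is shaped as a graph. A rough Python sketch over an in-memory adjacency map makes the point; this is purely illustrative and is not Neo4J's query API, and the names are invented.

# People and their friends, plus the dogs each person owns.
friends = {"me": {"ann", "bob"}, "ann": {"carl"}, "bob": {"dana"}}
dogs = {"carl": ["Rex"], "dana": ["Spot", "Milo"]}

def dogs_of_friends_of_friends(person):
    """Walk two hops out from 'person' and collect the dogs' names."""
    second_hop = {fof for f in friends.get(person, set())
                      for fof in friends.get(f, set())}
    return [dog for p in second_hop for dog in dogs.get(p, [])]

print(dogs_of_friends_of_friends("me"))   # ['Rex', 'Spot', 'Milo'] (order may vary)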

The real game is keeping the features that are necessary while avoiding the ones that aren't. Project Cassandra, for instance, promises to offer consistent answers "eventually," which may be several seconds in a heavily loaded system. Neo4J requires the addition of Lucene or some other indexing package if you want to look for particular nodes by name or content, because Neo4J will only help you search through the network itself.

All of these new projects are just the latest to rediscover the speed that might be found by relaxing requirements. Look for more adjustments that relax the rules while enhancing backward compatibility and ease-of-use. And expect a new era of data processing like nothing we've experienced before.

-- Peter Wayner

4. I/O virtualization
I/O virtualization addresses an issue that plagues servers running virtualization software such as VMware or Microsoft Hyper-V. When a large number of virtual machines runs on a single server, I/O becomes a critical bottleneck, both for VM communication with the network and for connecting VMs to storage on the back end. I/O virtualization not only makes it easier to allocate bandwidth across multiple VMs on a single server, it paves the way to dynamically managing the connections between pools of physical servers and pools of storage.

But let's start with the individual server. Take, for example, VMware's recommendation to allocate one gigabit Ethernet port per VM. A server that supports 16 VMs would therefore need four four-port gigabit Ethernet NICs, plus additional Ethernet (iSCSI), SCSI, or Fibre Channel adapters for the necessary storage. Many servers don't have enough empty slots to support that many adapters, even if the cooling capacity were adequate. And 16 VMs per host is barely pushing it, considering that today's Intel and AMD servers pack anywhere from 8 to 24 cores and support hundreds of gigabytes of RAM. Consolidation ratios can go much higher.

[ I/O virtualization is key to the highly scalable architecture of Cisco's Unified Computing System. See "Test Center review: Cisco UCS wows." ]

In response, I/O virtualization vendors such as Xsigo and Cisco have come up with a way to give each server one very high-speed connection instead of multiple Ethernet and Fibre Channel connections. One adapter per server can then provide many virtual connections. These adapters are not custom HBAs, but standard 10 gigabit InfiniBand or Ethernet adapters used with drivers in the OS that let the OS treat the single fast connection as multiple network and storage connections. Since everything is running over a single pipe, the system can grant bandwidth to the virtual connections as needed, providing maximum performance where appropriate.

Typically, a single adapter resides in each server, connected by a single cable to the appliance or switch, which then provides both network and storage ports to connect to storage and other networks. This simplifies datacenter cabling, as well as the installation of each server. It also eases the task of transferring adapters to another system if a server fails. In solutions such as Cisco UCS, I/O virtualization makes server provisioning, repurposing, and failover extremely flexible and potentially completely automated, as it's handled entirely in software. Further, because the I/O virtualization systems can emulate either multiple Ethernet or Fibre Channel connections running at varying speeds, available bandwidth can be tailored to the requirements of VM migration or other heavy loads.

Virtualizing I/O does require drivers that support the specific OS in use. The major operating systems and virtualization platforms are supported, including VMware ESX and Windows Server 2008 Hyper-V, but not necessarily all versions of Linux and Xen or other open source virtualization platforms. If you're using supported OSes, I/O virtualization can make running a large datacenter much simpler and far less expensive, particularly as increased processing power and memory support allow servers to handle vaster numbers of virtual machines.

-- Logan Harbaugh

3. Data deduplication
Data is the lifeblood of any business. The problem is what to do with all of it. According to IDC, data in the enterprise doubles every 18 months, straining storage systems to the point of collapse. The blame for this bloat often falls on compliance regulations that mandate the retention of gobs of messages and documents. More significant, though, is that there's no expiration date on business value. Analyzing data dating back years allows users to discover trends, create forecasts, predict customer behavior, and more.

Surely there must be a way to reduce the immense storage footprint of all of this data, without sacrificing useful information. And there is, thanks to a technology known as data deduplication.

Every network contains masses of duplicate data, from multiple backup sets to thousands of copies of the employee handbook to identical file attachments sitting on the same e-mail server. The basic idea of data deduplication is to locate duplicate copies of the same file and eliminate all but one original copy. Each duplicate is replaced by a simple placeholder pointing to the original. When users request a file, the placeholder directs them to the original and they never know the difference.

Deduplication takes several forms, from simple file-to-file detection to more advanced methods of looking inside files at the block or byte level. Basically, dedupe software works by analyzing a chunk of data, be it a block, a series of bits, or the entire file. This chunk is run through an algorithm to create a unique hash. If the hash is already in the index, that means that chunk of data is a duplicate and doesn't need to be stored again. If not, the hash is added to the index, and so on.
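A stripped-down sketch of block-level deduplication in Python illustrates the hash-index idea described above. The 4KB chunk size and the SHA-1 digest are arbitrary assumptions for the example, and the in-memory dictionary stands in for whatever index a real product would keep.

import hashlib

CHUNK = 4096   # assumed block size

def dedupe(path, index):
    """Split a file into fixed-size chunks and store only chunks never seen before.
    'index' maps a chunk's hash to its bytes and acts as the dedupe store."""
    recipe = []                              # hashes needed to rebuild this file
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            digest = hashlib.sha1(chunk).hexdigest()
            if digest not in index:          # new, unique data: keep one copy
                index[digest] = chunk
            recipe.append(digest)            # duplicates cost only a pointer
    return recipe

def rebuild(recipe, index):
    """Reassemble the original bytes from the recipe of chunk hashes."""
    return b"".join(index[digest] for digest in recipe)

Feed two near-identical files through dedupe with the same index and only the chunks that differ get stored twice; everything shared is kept once and referenced by hash.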

Data deduplication isn't just for data stored in a file or mail system. The benefits in backup situations, especially with regard to disaster recovery, are massive. On a daily basis, the percentage of changed data is relatively small. When transferring a backup set to another datacenter over the WAN, there's no need to move the same bytes each and every night. Use deduplication and you vastly reduce the backup size. WAN bandwidth usage goes down and disaster recovery ability goes up.

More and more backup products are incorporating data deduplication, and deduplication appliances have been maturing over the past few years. File system deduplication is on its way too. When it comes to solving real-world IT problems, few technologies have a greater impact than data deduplication.

-- Keith Schultz

2. Desktop virtualization
Desktop virtualization has been with us in one form or another seemingly forever. You could probably even say that it's been emerging since the mid-1990s. But there's more to desktop virtualization today than most of us could have imagined even two or three years ago. Yet another milestone is just around the corner: truly emergent technology in the guise of the desktop hypervisor.

Long the leader in this space, Citrix Systems' XenApp and XenDesktop are examples of how desktop virtualization just might put a desktop server farm in every datacenter and a thin client on every desktop. XenDesktop weaves together all the prevalent desktop and application virtualization technologies into a single package: traditional application and desktop sessions, application streaming, and VDI (Virtual Desktop Infrastructure). No matter which way you turn, the drawbacks of each are generally offset by the strengths of another.

[ Desktop virtualization, three ways: Check out InfoWorld's detailed evaluation of VMware View, Citrix XenDesktop, and Citrix XenApp. ]

The client hypervisor takes desktop virtualization the last mile. Picture each desktop running its own bare-metal virtualization layer that abstracts the baseline hardware to whatever VM you wish to push to the desktop, where it can be centrally managed, synced with a mirror on a server, and easily replaced (or even reset by the user) when things go wrong. Citrix isn't alone with this concept -- VMware is developing a similar solution, and both promise to hit the market in 2010.

Regardless of what solutions are available today and what solutions may be on the horizon, enterprise desktop management remains one of the biggest points of pain in any organization. While the model for datacenter architecture has changed systemically in the past 20 years, the model for deploying desktops hasn't. In most places, it's still one fat box per user, with some mishmash of management tools layered across the top to protect the users from themselves and protect the network from the users.

Whether any of the desktop virtualization technologies are applicable to your enterprise is wholly dependent on the nature of the business. Call center and health care treatment room terminals are a relative no-brainer, but you can quickly run into problems with noncompliant applications in other implementations. As the blend of desktop virtualization technologies reaches a critical mass, the wide variety of ways to ship a Start menu to a user offers a better chance that at least one will apply in every instance. Certainly, if the world turns its back on fat clients at every desk, IT will be a happier place. As for the users, the client hypervisor may give both IT and the most ardent fat client holdouts what they need.

--Paul Venezia

1. MapReduce
Why on earth would InfoWorld pick a programming framework for distributed data processing as the most important emerging technology of 2009? Because MapReduce enables enterprises to plunge into analyzing undreamed of quantities of data at commodity prices, a capability that promises to change business forever.

IDC has predicted a tenfold growth in digital information between 2006 and 2011, from just under 180 exabytes to 1,800 exabytes (that's 1 trillion and 800 billion gigabytes!). This explosion represents a challenge, of course (how to store, retrieve, and archive all that data), but also a huge opportunity for enterprises. After all, everything in that sea of data is potentially information -- information that could be used to guide business decisions.

Until recently, enterprises that might want to process petabytes of independent data to find business-relevant relationships would need an extremely good reason to invest in such a venture; the costs and time required were prohibitive. But this is quickly changing as enterprises begin to adopt highly distributed processing techniques, most notably MapReduce, a programming framework that has enabled Google, Yahoo, Facebook, MySpace, and others to process their vast data sets.

In its simplest form, MapReduce divides processing into many small blocks of work, distributes them throughout a cluster of computing nodes (typically commodity servers), and collects the results. Supporting highly scalable parallel processing, MapReduce is fast, cheap, and safe. If one node goes down, the lost work is confined to that individual node.
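The canonical illustration is word counting. Here is a tiny, single-machine Python sketch of the map and reduce phases; it only mimics the shape of the framework, while real implementations such as Hadoop distribute these same steps (plus a shuffle in between) across a cluster.

from collections import defaultdict

def map_phase(chunk):
    """Map: turn one block of text into (word, 1) pairs."""
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
intermediate = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(intermediate))   # {'the': 3, 'quick': 1, 'fox': 2, ...}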

Google introduced the MapReduce framework in 2004, but there are many implementations today, including Apache Hadoop, Qizmt, Disco, Skynet, and Greenplum. Apache Hadoop is the leading open source implementation. Amazon taps Hadoop to offer MapReduce as an Amazon Web Service. Cloudera, which bills itself as offering "Apache Hadoop for the Enterprise," is making significant inroads.

Support for MapReduce programming is also delivered in several enterprise software products such as GigaSpaces eXtreme Application Platform, GridGain Cloud Development Platform, IBM WebSphere eXtreme Scale, and Oracle Coherence, to name a few.

The inexorable growth of data is a fact of life. As vendors drive the MapReduce framework into product offerings, we have a new window into what all those petabytes mean. It's difficult to imagine how, just 30 years ago, businesses could function without the benefit of business intelligence software or even spreadsheets. When MapReduce becomes part of the culture, business strategists in the not-too-distant future may look back on our era in the same way.

-- Savio Rodrigues

This story, "InfoWorld's Top 10 emerging enterprise technologies," was originally published at InfoWorld.com. For more on technology awards, go to InfoWorld.com.

Saturday, November 21, 2009

Top 10 Windows tools for IT pros

By Randall C. Kennedy | InfoWorld

They say you can tell a lot about a person by the tools they bring to the job. If you're a professional plumber or a carpenter, people will expect you to carry the right tools for the task at hand. The same holds true for IT pros. Those in the know will judge you by the depth and sophistication of the technical toolkit you bring to a support call.

To help you make a good first impression and to cement your reputation as a seasoned troubleshooting guru, I offer the following list of my top 10 must-have Windows utilities for PC support professionals. Some you probably already know. Others you may have heard of only in passing. But all deserve your consideration for a place in your PC support and diagnostics toolkit.

1. Sysinternals Suite
When Mark Russinovich and Bryce Cogswell embarked on their quest to fill what they perceived to be glaring holes in Windows' management and diagnostic capabilities, they couldn't possibly have known the impact their suite of tools and utilities would have on the greater Windows IT community. Yet here we are, a short decade later, and it's hard to imagine a world without Process Monitor, Autoruns, RootkitRevealer, and the rest of the Sysinternals Suite.

A lot has changed since those early days. Russinovich was eventually courted by Microsoft and now serves as a technical fellow overseeing the evolution of the Windows architecture (much to the benefit of users everywhere). Meanwhile, the Sysinternals Suite has been transformed from an unofficial -- yet highly prized and admired -- collection of backdoor hacks into a must-have ensemble of fully sanctioned tools offered under the Microsoft TechNet banner.

There's not much else to say about Sysinternals except that, like all of the tools I've highlighted here, it's absolutely free. And no Windows professional should be without it. For shame!

2. HWiNFO32
What kind of CPU is in the PC you're using? Does it support hardware virtualization so that it can run Virtual Windows XP Mode under Windows 7? How old is the BIOS? Are all of its core hardware components running within normal voltage parameters? Are any components overheating or otherwise misbehaving?

If you can't answer these questions off the top of your head, you need a copy of HWiNFO32. A free, comprehensive hardware diagnostics and inventory tool, HWiNFO32 can tell you everything you need to know about the PC you're trying to support. CPU and chip set type/revision/stepping levels; runtime capabilities (VT, SSE, etc.); disk drive S.M.A.R.T. support/health; and myriad other details -- all are revealed when you run HWiNFO32.

To be sure, there are other hardware identification utilities. CPU-Z, Everest, and SiSoftware Sandra all come to mind. However, none of them can match HWiNFO32's breadth of reporting and rock-bottom pricing (it's freeware). Next time you're in the field and need to know what's inside that flaky PC, don't just guess at the specs. Run HWiNFO32 and know exactly what you're dealing with before you ever crack that case.

3. Crap Cleaner
Windows is an untidy beast. From orphaned registry entries to long-abandoned temporary files and folders, the process of running Windows makes a mess out of your PC's file system and software configuration parameters. In fact, Windows' tendency toward sloth has spawned an entire industry of tools designed specifically for cleaning up the detritus of daily life in a Microsoft world. And by far the best of the best is the appropriately named Crap Cleaner, aka CCleaner.

A combination registry and file system cleanup tool, Crap Cleaner makes quick work of those insidious configuration inconsistencies and accumulated temporary files and folders that can bog down system performance. It includes a variety of program-specific cleaning functions for common applications and directly supports cache maintenance functions for all major Web browsers. And for the truly paranoid among us, Crap Cleaner even offers a secure erase function to ensure that those deleted temporary files are gone for good.

Another must-have freebie, Crap Cleaner can breathe new life into what may have seemed like a terminally ill Windows patient. Before you reach for the Windows installation media to conduct a clean install, give Crap Cleaner a try.

4. Recuva
It's the support call we all dread most. That guy in accounting, the one who always misplaces his data files, has accidentally deleted another critical workbook. Now he's on the phone, screaming for help and expecting you to save the day.

Fortunately, there's a red cape and tights lurking in your toolkit. With Recuva, you can recover deleted files from almost any Windows-compatible storage media, including removable disks and thumb drives. It works by scanning the disk for orphaned file data and giving you the option of recovering what it finds.

Recuva can operate in "quick" mode, where it checks for deleted files in the higher-level disk structures, or in "deep scan" mode, where it searches your media block by block. If you later decide that the file in question was really better off dead, Recuva can perform a secure delete operation to make sure that nobody else can recover the data anytime soon.

You'll want to keep a copy of Recuva handy for the inevitable panicked phone call from your accidental deleter. As with the other tools here, Recuva is completely free. There's even a portable version that you can run off of a thumb drive, so there's really no excuse to be caught unprepared when the inevitable occurs.

5. FileZilla
Spend any time supporting Windows PCs and you'll find yourself becoming an expert at FTP. That's because the File Transfer Protocol is the de facto way of distributing device driver updates and patches to Windows customers. And depending on the prevalence of specific makes and models, you'll likely begin to learn the directory structure of your primary vendor's FTP site like the back of your hand.

Fortunately, Windows users have access to one of the more powerful FTP clients available: FileZilla. Now in its third major generation, this capable if somewhat homely application makes transferring large quantities of files a snap. A capable queue management system helps keep the uploads and downloads humming along while a nearly endless selection of configuration options and protocol tweaks ensures that, whatever the situation, FileZilla will come through every time.
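And if a transfer ever needs to be scripted rather than clicked through, the protocol itself is simple enough for Python's standard ftplib to handle. The host name, directory, and file name below are placeholders for illustration, not a real vendor site.

from ftplib import FTP

# Hypothetical vendor site and file names -- substitute the real ones.
with FTP("ftp.example.com") as ftp:
    ftp.login()                                      # anonymous login
    ftp.cwd("/pub/drivers/video")
    with open("driver_update.zip", "wb") as out:
        ftp.retrbinary("RETR driver_update.zip", out.write)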

6. Virtual CloneDrive
Microsoft loves the ISO image format. From trialware versions of its OS products to its latest Patch Tuesday security fix collections, Microsoft distributes more software in the ISO format than any other vendor. So it comes as somewhat of a shock to learn that, as of Windows 7, the company still hasn't integrated a basic ISO image mounting/drive-emulation capability into its core OS. Basically, if you want to use an ISO under Windows, you need to first burn it to a physical CD or DVD.

Fortunately, a variety of free third-party solutions fill this void. My personal favorite is Virtual CloneDrive, from Elaborate Bytes. Simple to use and highly compatible with all versions of Windows, VCD makes it easy to mount and access an ISO image. Just right-click the VCD tray icon, select Mount from the context menu, and point the ensuing file open dialog to the desired ISO file. The image then appears as a virtual CD-ROM drive in Explorer, allowing you to access its contents like a real disk.

Note: I used to be a Daemon Tools fan. However, recent versions have proven unstable under Windows 7 (I've traced several Blue Screens of Death to Daemon Tools' kernel mode components), and the developer's lackadaisical attitude toward supporting new Windows versions has turned me off from this once class-leading utility.

If you regularly need to mount and access ISO images, make sure you take along a copy of Virtual CloneDrive. It's easy to use, installs in seconds (no reboot required), and provides the capability that Windows should be providing but doesn't.

7. Defraggler
It's a trend that many IT pros find disturbing: With each new release of Windows, the amount of control over and feedback from the built-in disk defragmentation utility has decreased to the point that now, with Windows 7, the process is almost entirely opaque. And while third-party defraggers have been available for Windows since the early NT days (Diskeeper qualifies as a national historical monument), they can be both costly and complex.

Enter Defraggler. Another freebie from the folks who brought you Crap Cleaner and Recuva, Defraggler is a Windows disk defragmentation utility that provides tremendous control over the entire defragmentation process. With Defraggler, you can defragment individual files, folders, or an entire disk. Need to speed up access to that massive database file? Defraggle it. Want to improve performance for newly created files? Defraggle your disk's free space.

With Defraggler, you can even specify where specific files get stored on the disk, moving larger files that are accessed infrequently to the "end" of the disk media and smaller, frequently accessed files to the "front." A scheduled, background execution option and portable version (so you can take it with you when you're in the field) round out a surprisingly rich feature set for a free tool.

8. ImgBurn
Mounting ISO images is a great time saver. You get to save money on physical discs, and you don't have to wait while they're being prepared in your CD/DVD writer. But sometimes you just need to burn a physical CD/DVD -- for example, when installing a new version of Windows or distributing a custom disc image. And while there are myriad commercial solutions to allow you to author your own media masterpiece, most are overkill when it comes to basic disc burning tasks.

Thankfully, there's ImgBurn. The gold standard for freeware image burning tools, ImgBurn lets you transfer virtually any kind of ISO image to virtually any kind of writeable optical media. ImgBurn's simple, straightforward interface makes quick work of most common burning tasks, while its more advanced modes let you handcraft your own disc image, including defining the disk format, directory structure, and numerous other custom parameters.

There are other free utilities out there for burning discs (Alcohol 52% comes to mind). But none of them provide the level of flexibility or control over the burning process that ImgBurn does. Definitely a must-have addition to your IT support toolkit.

9. 7-Zip
Zip archives are a fact of life. Whether you're downloading an installation program from the Web or merely passing that bulky PowerPoint deck to your colleague in marketing, compressing files saves you time, space, and network bandwidth. Though Windows has included basic zip file support for years, its capabilities have proven anemic at best.

7-Zip is an open source archiving utility that improves upon Windows' basic capabilities, adding support for additional formats -- including its own, high-ratio compression algorithm -- while remaining lean, mean, and true to its original purpose.

Unlike more elaborate utilities, like the shareware WinRAR archiver, 7-Zip doesn't pretend to be all things to all people. You won't find ISO extraction or other esoteric functionality in 7-Zip -- just a basic archiving functionality that is secure, reliable, and fast. If you're looking for a beefier alternative to Windows' built-in zip capability, but don't want the hassle of dealing with the overly complex, "nagware" behavior of the well-known commercial offerings, check out 7-Zip.

10. VirtualBox
Troubleshooting a complex OS like Windows can be a challenge. Sometimes the only way to effectively diagnose a problem is to reproduce the entire runtime environment. And the only real way to do that, short of building a new physical PC, is through virtualization.

In the IT support and help desk space, VMware Workstation has long ruled supreme. However, the combination of high per-seat costs and an increasingly complex feature set is causing many IT pros to gravitate to the simpler, yet deceptively powerful VirtualBox from Sun Microsystems. Even better, an open source version of VirtualBox is available free under the GPL; compared to the commercial version, it's lacking only USB support and an RDP server that lets you connect to guests remotely.

Though not as full-featured as VMware Workstation -- for example, there's no equivalent to Workstation's Easy Install feature for Windows guests -- VirtualBox installs quickly, requires very little study, and includes all the essentials. It supports Windows, Linux, Solaris, and Mac OS X hosts and guests (including 64-bit editions), Intel VT and AMD-V hardware virtualization extensions, OpenGL and DirectX 3D graphics acceleration, VMware and Microsoft virtual hard disk formats, and of course, stateful VM snapshots.

So while VirtualBox may seem less sophisticated than VMware Workstation on the surface, the truth is that VirtualBox 3.8 delivers tremendous virtualization power. And like all of the other utilities here, it's available absolutely free.

This story, "Top 10 Windows tools for IT pros," was originally published at InfoWorld.com. Follow the latest developments in Windows at InfoWorld.com.

Monday, October 12, 2009

Long time no see...

Haven't really got time to come and talk...

Today I'm in a really bad mood, although I also dunno why... why am I so down on luck when I am supposed to be very lucky? I hate my personality... why am I so antisocial?

Anyone wanna rent a room in Punggol? Please let me know...

Saturday, October 10, 2009

HTC Hero

Finally got my HTC Hero. This is truly a better phone than the iPhone. I am blogging this post from my HTC Hero using AndroBlogger.

Monday, April 13, 2009

2nd hand furniture

http://hocksiong.com.sg

http://www.2ndlifefurniture.com

Oil Painting....



Want to buy these 2 paintings... can anyone help?