WhatsApp – Email overload & Enterprise social networks!

For the last few years I have been getting well acquainted with several companies in my line of work as a tech sales guy, and I see several organizational problems building up due to unfettered tech and its wrong usage. I’d love to solve them and have a few suggestions, and being who I am, they are inevitably about adopting more technology! I mean, hey, WhatsApp just got bought for upwards of $10 billion and everybody has a huge opinion about it. But all it really did was move text messaging off the carrier voice/SMS channel of the GSM and CDMA networks onto the Internet Protocol data channel of those same networks, and then layer on the ability to capture and share media and files and to create private friend-and-family networks, all on the mobile!

Email overload at the workplace is the big problem I really want to tackle now – it is so current that if you typed the phrase into a search engine, you would find an article published within the last hour, like the one I found when I started typing this out. The solution is not really that hard: move the traffic elsewhere, onto other servers and channels that can handle it, that are less crowded and that carry less noise.

Go the Yammer and ESN way. ESN – enterprise social network – is a term I think I coined, if no one has used it before. If you don’t want to use Yammer, you may as well use Facebook: create corporate Facebook IDs centrally and so on. The point is that these platforms exist and can be used. You could also move to the salesforce.com platform, use it for your workflow and sales force automation systems too, and build the process there. Like it or not, people read their Facebook messages, Twitter feeds and WhatsApp conversations more than anything else – more than SMS, perhaps! There is even a bug report on Mozilla asking whether Reddit could be used to replace Yammer.

The medium also forces people to stay on script; brevity is the soul of wit, as Polonius says in Shakespeare’s Hamlet. People are pushed to be brief by the urgency of the medium and the forced restriction of space. Occasionally it spawns its own language: BRB, IMO, IMHO, LOL, BTW! It also brings in the concept of presence – you know who is logged in on a server somewhere on the internet and can send and receive files. Add the videoconferencing and calling that these platforms offer, and ESN becomes the best way to handle email overload. In a very Marshall McLuhan-esque manner, the medium becomes the message! Much of it is also open source or free to configure, with payment usually required only for admin-level rights, and it gets the work done.


Digital Marketing, internet advertising and social media.

I have been drifting these last few months and my blog posts show it! One of the things I drifted into was a small commission to manage the digital marketing effort at a startup social gaming/entertainment network based out of, and targeting, the South East Asian and Pacific Rim markets.

Digital marketing today is a fairly evolved and complex set of activities in the visually connected and engaged space of the consumer’s mind. At one level it is simple internet banner ads and networked digital signage, but at another it is also complex socially networked media, people focus, crowd engagement, the collective user experience, blogospheres and sharing. At all levels it is pure and simple advertising: subtly or not, you are trying to drive a message through and cause action; make people buy, as it were. In the digital world it is about increasing your “like” counts. Somehow, like it or not, our lives – and our companies and organizations too – are being defined by the lingua franca of social networks like Facebook, Friendster, MySpace and many others before that.

A key point in the development of the digital advertising space was the design and implementation of search term monetization by Google – very simply put, AdWords and the entire bidding-related marketplace it has spawned. Acquisition brought control of DoubleClick, a pioneering ad-serving company, in 2007, and before long the empire also had an interest in mobile ad serving with Android and AdMob, another well-timed acquisition in 2009. For advertisers it meant a sort of implied granular control over their ad spend. I insist on “implied” because beyond one level it all becomes blind and you really do not know what is going on. Others have also opined against it, and here is one such view.

Advertising of the mainstream variety has always been inherently a “dark art”. A famous quote I recall is that “…half of my ad budget is wasted – I just don’t know which half…” You made a good ad, you bowed to creative urges and flashes, and then you spent on media – a lot like shooting many arrows and hoping some of them stick. Internet advertising in the olden days was a lot like that: you had banner ads and you hoped some of them got seen. Then came pop-ups and pop-unders, things got crazy for a while, and standards got set up. A little of the relevance of this type of advertising sank with the development of ad blockers. Before long, banner ads that ran Java were also proving to be an effective malware vector, infecting visitors through drive-by downloads.

At a time like this, the search engine keyword/term advertising that Google brought in set the industry standard. Advertisers were intrigued, and the prospect of granular payment by the number of clicks and click-throughs set the mind of the marketer afire. Today about 20% of all advertising budgets are thought to be going into internet advertising, some estimates say that it will soon be 50%, and one forecast has it outstripping print advertising by 2015. Google’s system was imitated by the rest, and anyone who runs a search engine today monetizes it this way, including Bing – the other search engine. The system works quite simply: you quote a price – a “bid”, as it were – for a set of keywords that best suit your product or service and that the average user would use to search for things related to it. Every time that search term is used and your bid is not lower than everyone else’s, your ad is shown next to the search results. Google helps you with a keyword generator and a suggested bid that approximates the going bids, and you need ad copy that is ideally a hundred-odd characters. You can also use the search terms you generate to advertise on internet banner ads on desktops and on mobile devices, inside apps and on the web through a browser. The advertiser gets charged every time someone clicks through one of the ads.
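To make those mechanics concrete, here is a minimal sketch in Python of the basic idea: the highest eligible bid on a search term wins the slot, and the advertiser is charged only when the ad is clicked. The advertiser names and bid amounts are invented for illustration, and the real system also factors in ad quality and typically charges based on the next-highest bid, which this toy version ignores.

# A toy model of keyword-bid advertising: the highest bid for the searched
# term wins the ad slot; the winner pays only when the ad is clicked.
# All names and numbers here are invented for illustration.

bids = {
    "car insurance": [("AcmeInsure", 2.40), ("BudgetCover", 1.75)],
    "cheap flights": [("FlyLow", 0.90)],
}

spend = {}  # running spend per advertiser


def serve_ad(search_term):
    """Return the winning advertiser for a search term, if anyone bid on it."""
    candidates = bids.get(search_term, [])
    if not candidates:
        return None  # no one bid on this term, so no ad is shown
    winner, _ = max(candidates, key=lambda pair: pair[1])
    return winner


def record_click(search_term, advertiser):
    """Charge the advertiser their bid amount when the ad is clicked."""
    for name, bid in bids.get(search_term, []):
        if name == advertiser:
            spend[name] = spend.get(name, 0.0) + bid


winner = serve_ad("car insurance")   # AcmeInsure wins the slot
record_click("car insurance", winner)
print(winner, spend)                 # AcmeInsure {'AcmeInsure': 2.4}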

Like traditional advertising, however, the guy with the most money is usually the one whose message gets out better and louder. The brands deemed the most social are inevitably also the brands with the largest ad spends in the “offline”, traditional advertising space. Things get different when you realize that this is a self-service “platform”: you log in to a web site that gives you a fairly complicated but tightly laid out set of options and mechanisms for your ad. There is telephone support, and you can dial a toll-free number to talk to someone who can guide you through the process and help you spend. There are of course agencies that will help you do this, and increasingly they are traditional advertising agencies that develop a separate department or perhaps acquire some hot startup. But the beauty is the egalitarian nature: anyone with a credit card can advertise. This has particularly appealed to the small and medium business cluster and has unlocked advertising dollars that were not being spent before.

This, in a way, was the early stage of the internet advertising age, which itself cannot yet be called a long-running phenomenon. It was the stage before social came into the picture, and it was when one web site built an audience in the hundreds of millions that soon became a billion. The good news was that the audience gave the web site their personal details, location, preferences – you name it – a deep insight, as it were, into who the users were and what they were doing. Facebook was suddenly the wonder child of the new internet generation, with a young founder CEO and a dashing style built on openness and a Hollywood credo. When they finally rolled out their advertising mechanisms, they offered unbelievably fine-tuned targeting of their user base without actually jeopardizing anonymity. On Google you advertised on Google’s and other people’s sites when you ran banner ads, or on Google’s and other people’s apps; the inventory of ad space from the non-Google apps and sites was more often than not sold among all the other ad-server suppliers, of which there were quite a few, as even a survey and listing from 2010 shows. On Facebook, though, that inventory is all theirs: page posts, promoted posts, likes, installs, display on the web site or on Android and iOS phones – everything they own is their ad inventory. It is also a bidding-style self-service engagement, but such a deep level of granular segmentation is possible that you can actually build a very strong, motivated following from the likes. Facebook also gives you a nice medium to show your wares through a Facebook Page, which is like a digital storefront at one level. The strength of the still-new platform is that it can drive app installs at a well-nigh unprecedented rate, as has been seen.

Cognitive Bias in Software Testing

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, leading to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. Implicit in the concept of a “pattern of deviation” is a standard of comparison with what is ‘normally’ expected; this may be the judgment of people outside those particular situations, or a set of independently verifiable facts. In well-run software development projects, the mission of the test team is not merely to perform testing, but to help minimize the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track and report product quality, so that others in the project can make informed decisions about product development. It is important to recognize that testers are not out to “break the code”, or to embarrass or complain – just to inform, as meters of product quality. The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item. The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

In the traditional software development model – often called the waterfall – everything flows from a process and each step starts after the previous one. Testing sits at the end of that process and is often regarded as less than essential. In software development companies, therefore, the testing that happens in-house is more often than not seen as a rigour rather than as part of the project flow. It is billable, and some developer organizations that are committed to systems and processes try to do a good job of it in the name of QA and to deliver a defect-free product in the first release. But it is the nature of the beast that there can never be only one version of a product. Things change, sometimes for the better: the operating system is upgraded in the way it works, and the application needs to be updated so that the newer capabilities are incorporated, calling for a new round of development and another layer of testing, and the process repeats itself.

But can people in the same organization, organized in a construct aimed at getting the release out quickly and fairly defect-free, be the best judges of quality? In a seminal human-computer interaction study, “Positive test bias in software testing among professionals: A review”, Laura Marie Leventhal, Barbee M. Teasley, Diane S. Rohlman and Keith Instone at the Computer Science Department, Bowling Green State University, Ohio, found ample evidence that testers have positive test bias. This bias manifests as a tendency to execute about four times as many positive tests, designed to show that “the program works”, as tests which challenge the program. The researchers found that the expertise of the subjects, the completeness of the software specifications, and the presence or absence of program errors may reduce positive test bias. Skilled computer scientists invent specifications to test against in the absence of actual specifications, but still exhibit positive test bias.
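To see what “positive” versus “challenging” tests look like in practice, here is a tiny illustration in Python around a hypothetical divide function (the function and the tests are mine, not the researchers’); a tester with positive test bias writes mostly the first kind and few of the second, roughly in the four-to-one ratio the study reports.

# Illustration of positive test bias around a trivial, hypothetical function.

def safe_divide(a, b):
    """Divide a by b, returning None when b is zero."""
    if b == 0:
        return None
    return a / b


# Positive tests: confirm the program works on well-behaved input.
assert safe_divide(10, 2) == 5
assert safe_divide(9, 3) == 3
assert safe_divide(0, 5) == 0
assert safe_divide(-8, 4) == -2

# Challenging (negative) tests: try to break the assumptions.
# A biased tester writes far fewer of these, yet this is where
# defects tend to hide (division by zero, wrong input types, ...).
assert safe_divide(1, 0) is None
try:
    safe_divide("1", "x")      # wrong types: should this raise? return None?
except TypeError:
    pass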

Another study, “Confirmation Bias in Software Development and Testing: An Analysis of the Effects of Company Size, Experience and Reasoning Skills” by Gul Calikli, Berna Arslan and Ayse Bener at the Department of Computer Engineering, Software Research Laboratory, Bogazici University, Turkey, showed that regardless of experience and company size, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors behind low confirmation bias levels, and that education and training programs emphasizing mathematical reasoning techniques are useful for producing high-quality software. To investigate the relationship between code defect density and the confirmation bias of software developers, the researchers performed an experiment among developers involved with a software project at a large-scale telecommunications company and analyzed the effect of confirmation bias during the software testing phase. Their results showed a direct correlation between confirmation bias and the defect proneness of the code. Their concluding summary shows no significant relationship between software development or testing experience and hypothesis testing skills; experience did not play a role even in familiar situations such as problems in the software domain. The most striking difference was found between the group of graduate students and the software developers and testers at the companies in terms of abstract reasoning skills. The fact that the students scored better on software-domain questions, although most of them had less software development and testing experience, indicates that abstract reasoning plays an important role in solving everyday problems. It is highly probable that theoretical computer science courses strengthened their reasoning skills and helped them acquire an analytical and critical point of view.

Hence, we can conclude that confirmation bias is most probably affected by the continuous use of abstract reasoning and critical thinking. Company size was not a differentiating factor in abstract reasoning, but differences in hypothesis testing behaviour were observed between the two groups of companies grouped by size. The large company performed better in the interactive test, but the group of students outperformed this group on both tests.

This led to the conclusion that hypothesis testing skills were better in the group of students, and that there is a relationship between confirmation bias and continuous use of, and training in, logical reasoning and critical thinking. Current trends like crowd-sourced testing are, in this light, structured attempts at making this happen in real time over larger and wider deployments.

There are several kinds of biases that the average human is exposed to and commits, but in the business of software testing each one poses its own challenges, and the astute tester must watch for it and compensate for it in the test design. Making testing an independent area, outsourced separately from application development, is therefore strategically very important. A brief listing of some of these biases follows, along with views from some experienced thought leaders in the independent software testing community.

(1.) Observational Bias happens when one looks only where one expects to find positive results, or where it is easy to record observations – a little like looking for something lost only under the streetlight! Darren McMillan, an independent software testing consultant from Glasgow, in his Requirements Analysis & Testing Traps rightly points out the danger of having visual references (wireframes) at a very early stage of the project lifecycle: they can take your attention away from something more fundamental within the text of the requirements themselves.

(2.) Reporting Bias – a tendency to under-report unexpected or undesirable experimental results, attributing the results to sampling or measurement error, while being more trusting of expected or desirable results, though these may be subject to the same sources of error. Over time, reporting bias can lead to a status quo where multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. A valuable piece of information can be skewed to make a problem seem less severe (e.g. <1% of our customer base use *that* browser so can’t do XYZ).

(3.) Survivorship Bias, a type of selection bias, is the logical error of concentrating on the people or things that “survived” some process while inadvertently overlooking those that didn’t because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may literally be people, as in a medical study, or could be companies, research subjects, applicants for a job, or anything that must make it past some selection process to be considered further. Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than being just lucky. For example, if three of the five students with the best college grades went to the same high school, that can lead one to believe that the high school must offer an excellent education.

(4.) Confirmation Bias is the tendency of people to favor information that confirms their beliefs or hypotheses; they display this bias when they gather or remember information selectively, or when they interpret it in a biased way. People also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization, the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations). This particular bias is the big daddy of all biases, since it has so many variations. Michael Bolton, an independent software testing consultant, Principal of DevelopSense and co-author (with James Bach) of Rapid Software Testing, from Toronto, Canada, provides some really useful tips for escaping confirmation bias in his book.

(5.) Anchoring Bias, or focalism, is a term used in psychology to describe the common human tendency to rely too heavily, or “anchor”, on one trait or piece of information when making decisions. During normal decision making, individuals anchor on, or overly rely on, specific information or a specific value and then adjust from that value to account for other elements of the circumstance. Usually once the anchor is set there is a bias toward that value. Michael D. Kelly, a testing veteran from Indiana in the US, talks about simply sketching out a schematic of sorts and talking through his ideas (not necessarily solutions). It could just be a “talk it through with your mates” heuristic?

(6.) Congruence Bias occurs due to people’s over-reliance on direct testing of a given hypothesis and neglect of indirect testing. It is a kind of Confirmation Bias, as mentioned earlier. Pete Houghton, a contract tester at the Financial Times, London, opines on the arrogance of regression testing: “We stop looking for problems that we don’t think are caused by the new changes,” claims Pete. And there are many others, such as Automation Bias, Assimilation Bias and so on… there are quite a lot of cognitive biases out there, and you may wonder how testers even get out of the starting blocks with so many possible ways for their judgment and work to be skewed.

Why the future of the smartphone OS may not be Android?

Andrew Rubin was a manufacturing engineer at Apple between 1989 and 1992, later moved to Microsoft, started Danger, and then finally developed the Android smartphone operating system between 2003 and 2005. Apple thinks this history is a valid point in its cases against HTC and Android, and has stated in court filings that he must have gotten his inspiration back then. Whatever else the marketplace for the future smartphone OS is going to be, it is sure to be acrimonious and there will be bloodletting. I think Symbian, and in part Nokia as it stands now, are some of the early victims. There is an opinion among thought leaders like Gartner, Forrester and IDC that Android is going to be the mainstay of the future smartphone OS business. They may be missing the critical point that the OS, at some level, is nothing but a rendition of – a take-off on – the Apple iOS. It does not really do anything better, and by design does a few things, like security, all that much worse. If the market is to be a Darwinian survival-of-the-fittest game, then there are some very essential character flaws that Android can never hope to recover from.

For one, it is much litigated against, with both Android and original device makers like Samsung having several legal cases currently on in different parts of the world, and there must be some kind of cost to that. From thence comes the next problem: it is not really free, and at the end of the day, thanks to all the litigation, a device maker needs to pay royalties to, of all people, Microsoft. As a matter of fact it was recently reported that Android is a pretty large revenue stream on Microsoft’s balance sheet. The irony, as CNET says, is enough to make your head explode: Microsoft makes more money on Android than it does on its own smartphone OS! This is not exactly going to get easier, and then there is Dalvik and the Oracle suit, which has not been going particularly well for Google at the moment. To add to the mix we now have Motorola, and that must be taking some getting used to. As long as Google just made the OS, there were enough original device makers (ODMs) like Samsung, HTC, LG and the rest to make devices around it, as it was ostensibly free; but once you put Motorola, which Google bought, into the mix – then why? At many levels they are now competitors, aren’t they? Google has of course stated that Motorola will enjoy no advantage, but leaked documents seem to indicate (as all corporate hypocrisy inevitably does) the exact opposite. An immediate result has been that both Samsung and HTC are looking at other options; with the shuttering of HP’s device strategy there is a webOS they can play with, and let no one forget Microsoft, or RIM with QNX – the other two elephants in the room.

Today the reality is that Android is the largest OS out there for smartphones and tablets, but will that edge always remain, and can Google do something about it? Besides the environmental issues of patents, lawsuits and royalties, there are several technical issues too. For one, there is the version fragmentation that ensures there is no single user interface out there, no single user experience. There is a 2.1 and there is a 1.6, there is Ice Cream Sandwich, there is a tablet version that itself has two sub-versions, and in markets like the US, where the carriers control the OS on the device, it is a whole new story. At one point Google had to clamp down on the openness and all that one could do with the OS to ensure there was a semblance of a system there. Most of the competition has only one version out there, and the latest version effectively updates and relegates the older version to the dustbin. The incompatibility that the Android version issue creates for apps in its marketplace has bothered several developers and will possibly continue to haunt them. The Android marketplace – their version of Apple’s App Store but with none of the controls or quality filters – is one of the other problems with the OS. It has proved to be a very effective infection vector for a whole host of mobile malware and attacks. Android is always going to be a security nightmare, and there is really no way they will ever get on top of that in any easy way. The issue is not just limited to the platform: developers report an extreme amount of piracy, which will further erode confidence.

So you basically have an OS that is being developed by a company that already has another OS (Chrome OS), that is getting beaten up and sued, that is not making any direct money but is making money for the competition directly, that has several technical issues, and whose device makers are now either building, buying or consorting with other licence-based OSes – and this company will keep developing that OS? Then there are the third-party developers: can they absorb this cost and keep developing, given that this is not the most profitable platform to develop on, as recent surveys seem to show?

Social CRM – a user guide

Barack Obama, the current president and chief executive of the United States, launched his re-election bid at Facebook headquarters. On the 20th of April he will be addressing people there on the issues facing the American economy. If that is not one of the strongest endorsements of the importance of social networking, and of Facebook in particular, nothing else is! It is also one of the best ways to engage with the polity, the market and the stakeholders in a two-way conversation in real time. Obama knows this, and was one of the best users of this technology when he ran the first time around. Today two-thirds of the world’s internet population visits social networking or blogging sites, accounting for almost 10% of all internet time, according to a new Nielsen report, “Global Faces and Networked Places.” If data captured from December 2007 through December 2008 is any indication, that percentage is likely to grow, as time spent on social networking and blogging sites is growing at more than three times the rate of overall internet growth. “Social networking has become a fundamental part of the global online experience,” commented John Burbank, CEO of Nielsen Online. “While two-thirds of the global online population already accesses member community sites, their vigorous adoption and the migration of time show no signs of slowing.”

Global growth in Facebook numbers.

There is so much buzz about social CRM these days. Companies like Intuit, Procter & Gamble and Citigroup have embraced it in a big way. Gartner is now devoting magic quadrants to it, and a slew of companies have raced into this emerging field. According to Gartner, social CRM – the integration of CRM with the organization’s social media exchanges – will be a $1 billion subsector of the CRM market by the end of this year. The various sites, blogs and communities that comprise this arena represent the fastest growing areas of the Internet. Further, it now reaches more people than email, according to Nielsen Online.

What then is Social CRM, and what are its components? A little while ago Chess Media Group, in collaboration with Mitch Lieberman, developed the following image, which I believe is a great starting point for visualizing Social CRM within an organization – a sort of Social CRM “map”, if you will.

the Social CRM Process

If you look at the image you will see that this is about a flow of information, and it flows in the following way (a rough code sketch of the loop follows the list):

  • The community provides feedback via offline or online channels.
  • If the channel is online then it is monitored and picked up by a “listening” tool, which then integrates with a CRM system to provide customer information. If the channel is offline then it goes directly into a CRM system.
  • The information collected is automatically routed to the proper person in the proper department (several vendors are in the process of working on this to make it happen, others have some form of this developed already).
  • Once the person receives the information they can decide how to respond which will either be a macro response (public) or a micro response (private), or both.
  • The response is funneled through business rules which will dictate how and where the response will take place.
  • The response is once again captured by the CRM system so that the record is complete.
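Here is that rough code sketch of the loop, in Python; the routing keywords, department names and business rules are invented purely to illustrate the listen, route, respond and record steps, and are not taken from any particular vendor’s product.

# Toy model of the Social CRM flow described above: feedback arrives from
# a channel, is routed to a department, a macro (public) or micro (private)
# response is chosen, and everything is written back to the CRM record.
# All rules and department names are illustrative only.

ROUTING_RULES = {
    "refund": "billing",
    "broken": "support",
    "love": "marketing",
}

crm_records = []  # stands in for the CRM system


def route(feedback):
    """Pick a department based on simple keyword rules."""
    for keyword, department in ROUTING_RULES.items():
        if keyword in feedback["text"].lower():
            return department
    return "community"          # default owner


def respond(feedback, department):
    """Business rule: complaints get a private reply, praise a public one."""
    if department in ("billing", "support"):
        return {"type": "micro", "channel": "direct message"}
    return {"type": "macro", "channel": feedback["channel"]}


def handle(feedback):
    department = route(feedback)
    response = respond(feedback, department)
    record = {**feedback, "department": department, "response": response}
    crm_records.append(record)   # the loop is closed in the CRM
    return record


print(handle({"channel": "twitter", "user": "@someone",
              "text": "My order arrived broken, I want a refund"}))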

Paul Greenberg, Author of “CRM at the Speed of Light” says that “Social CRM is a philosophy & a business strategy, supported by a technology platform, business rules, workflow, processes & social characteristics, designed to engage the customer in a collaborative conversation in order to provide mutually beneficial value in a trusted & transparent business environment. It’s the company’s response to the customer’s ownership of the conversation.”

Brent Leary, a noted blogger in this space, compares traditional CRM and Social CRM: “And with multiple people ‘touching’ the customer for various reasons, it quickly became important to be able to track activities, appointments, potential deals, notes, and other information. Consequently, traditional CRM grew out of this need to store, track, and report on critical information about customers and prospects.”

The points of social interaction

Social CRM is growing out of a completely different need – the need to attract the attention of those using the Internet to find answers to the business challenges they are trying to overcome. Businesses began investing in CRM applications in the ‘90s mainly to store contact data. Before contact management software was available, businesses had to keep their valuable customer information in Rolodexes, spreadsheets and even filing cabinets. It was important to have a central location for the data that was also easily accessible, so that people could communicate effectively with contacts – and, as Leary notes above, it is the need to track every one of those interactions that traditional CRM grew out of.

Nothing captivates the attention of searchers like relevant, compelling content.  Having the right content, and enough of it, will help connect you with those needing your product or service.  Creating content in formats that make it easy for your target audience to consume it increases the probability that you will move them to action — starting a conversation with you.  Whether it be by developing a blog post, podcast, YouTube video, or Webinar, creating attractive content is a key pillar of social CRM strategy. Social CRM is a part of social business that helps companies make sense of (and then act on) the data they collect from social customer interactions. 

The use cases for social CRM – why do we need it?

Already, hundreds of vendors populate this space. Some offer little more than a widget while others attempt to roll social CRM in with traditional customer relationship management (CRM) or call center tools. Some call it social media monitoring or add some data mining tools and call it social media analytics.

According to Gartner analyst Adam Sarner, those who are currently ahead of the pack are Jive Software and Lithium Technologies. Jive offers collaboration software, social media monitoring and community software under the umbrella of its Jive Engage Platform. It recently added four Jive Apps Market features to enable faster development of social media tools on this platform, ease of purchasing, simplified billing and enhanced revenue sharing capabilities. “Jive Apps Market will change the economics of how developers market and sell the next generation of social business applications,” said Robin Bordoli, vice president of product management for Jive Apps. Lithium offers a similar suite with an emphasis on finding and engaging your most supportive customers. It ties together Facebook, Twitter, the social Web and branded communities.

Just about everyone is getting into the race these days. SugarCRM has added social features, but allows users to decide how they leverage social data and channels inside the Sugar system. For example, SugarCRM users can now monitor Twitter streams of contacts or accounts, as well as uncover leads and relationship data from networks like LinkedIn and place it into the CRM record. “The idea is not to limit users with prescribed notions of social CRM interaction management, but to provide simple tools for leveraging social channels and data to foster better interactions with customers,” said Martin Schneider, senior director of marketing at SugarCRM. “In social CRM, Oracle and Salesforce.com have potentially the deepest pockets, so it is no surprise that they have been talking up social features for some time.” However, he thinks that rigidity is present in their systems, as they are based either on older software code or they are lacking open multi-tenant features. That’s why he agrees with Gartner that Lithium and Jive are far ahead of both of these providers when it comes to inbound or outbound collaboration and community building. Schneider also likes RightNow’s approach, which focuses on customer experience rather than core collaboration or social media marketing tools. “We have not seen much social CRM talk out of Microsoft either,” said Schneider. “Microsoft Dynamics CRM is still finding its legs in terms of traditional CRM, cloud-based deployment and a channel-led go-to-market strategy, so social might just be on the back-burner at this time.”

The suitability & adoption of the use cases.

What, then, are the key components of Social CRM? Carie Lewis, Director of Emerging Media, Online Communications at the Humane Society of the United States, says: “..Listening is the first step in social media. You have to listen to what others are saying about you before you jump into the fire. Listening will tell you what people are saying, and where they are saying it, so you know where to get started….” In her blog she suggests free tools that NGOs can use to do this. Many of these tools are Twitter-focused, because Twitter is the easiest place to get started with listening.

There are today technical solutions, both free and paid, that aid this ‘listening’. A list of nine social media tracking and monitoring tools is briefly discussed here. Some are paid for and some are free. Many can be used together, and some integrate with others to maximize efficiency, tracking and response time. Enjoy, and do let me know of others you think should be on this list.

Radian6 has a flexible dashboard that enables monitoring of all kinds of social media with real-time results. Radian6 helps you identify influencers, measure engagement and determine which conversations are having an impact online. One great feature is the ability to identify an opportunity and send it directly to the person who should respond. This is the hottest company in the space and was recently acquired by Salesforce.com for $326 million.

Meltwater Buzz is also a paid social media monitoring tool that covers blogs, social networks, forums and more for brand monitoring and tracking. Meltwater enables sentiment tracking, geographical monitoring and keyword tracking. The interface is great, and it allows for geo-tagging and analytics.

SocialCast is a paid-for enterprise collaboration tool that connects your company’s data, people and resources in real time, much like Facebook updates. It makes information management and collaboration easier through micro-blogs, activity streams, groups, calendars, employee profiles and the like.

Salesforce Chatter – for those of you using Salesforce.com, here is a great addition to support sales and marketing alignment. Chatter (in beta testing now) promises to help you connect and share in real time by way of live feeds, micro-blogs, groups and employee profiles. It acts much like the other enterprise collaboration tools, but it is heavily integrated with your CRM and related activities. Expect to see some massive integration with the Radian6 suite mentioned earlier and now acquired by Salesforce.com.

SocialText, a company out of Palo Alto, has some paid and some free versions of its tools. Like the collaboration tools above, SocialText incorporates micro-blogging, wiki workspaces, blogs, groups and social networking to improve and enable enterprise collaboration.

Google Alerts are free email updates of the latest relevant Google search results for the web and news, based on your choice of keywords or topics. You can choose to receive alerts daily, weekly, or as they happen.

Twitter Search is Twitter’s own search module; it is free and allows you to search keywords or hashtags in real time and get a live feed of status updates. You can also search by location. Twitter Search makes it really easy to pull real-time results at any time. Most third-party Twitter tools, such as Hootsuite, have integrated Twitter Search into their applications, so it can all be done in one interface.

Hootsuite, from a Canadian company, lets you monitor your brand and other searches, schedule tweets, integrate with other social networks, track statistics and enable team workflow so you can manage multiple accounts and multiple users.

gURLs – Genius URLs are an easy way to track your social media conversations back to revenue. By using these shortened URLs in your tweets, blog posts and Facebook fan pages, you can see the engagement history of people once they convert from anonymous to known visitors or prospects on your site. This information can help you better score and nurture your prospects.
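For anyone who wants to experiment with the ‘listening’ idea before committing to any of the tools above, a few lines of Python are enough to poll an ordinary RSS feed for mentions of your brand terms. The feed URL and keywords below are placeholders you would replace with your own.

# Minimal brand-mention "listener" over a standard RSS 2.0 feed.
# The feed URL and keywords are placeholders; point it at any feed
# (blog comments, news search, forum) you want to monitor.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"    # placeholder feed
KEYWORDS = ["acme", "acme widgets"]          # placeholder brand terms


def fetch_items(url):
    """Download an RSS feed and yield (title, description) pairs."""
    with urllib.request.urlopen(url, timeout=10) as response:
        tree = ET.parse(response)
    for item in tree.iter("item"):
        title = (item.findtext("title") or "").strip()
        description = (item.findtext("description") or "").strip()
        yield title, description


def mentions(url, keywords):
    """Return feed entries that mention any of the tracked keywords."""
    hits = []
    for title, description in fetch_items(url):
        text = f"{title} {description}".lower()
        if any(keyword.lower() in text for keyword in keywords):
            hits.append(title)
    return hits


if __name__ == "__main__":
    for hit in mentions(FEED_URL, KEYWORDS):
        print("Mention found:", hit)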

What is Social IRM (influence relationship management)? The ad agencies’ take and the brand view.

Social IRM, a construct from Ogilvy Digital is the discipline of managing relationships between influencers and brands. It’s built on the principles of social media – respect, trust, and a true value exchange between brand and influencer. The goal of Social IRM is to activate genuine word of mouth online at a scale that can positively impact business.

John Bell at Ogilvy Digital, an agency focused on matters social, opines in his blog that social media is not a channel: do not start jamming content or ads down customers’ throats without understanding the ‘value exchange’ necessary to earn the customer’s attention and participation.

Ogilvy’s Social IRM

Social media, after all, is not just a growing collection of technologies that allow everyone to communicate, share, create and publish; it is also a new platform on which people express themselves and form connections and interactions with others who share some affinity, however niche.

Ultimately, everything social media enables is a new form of word of mouth, and word of mouth trumps most other forms of communication in its influence on many purchase decisions and opinions. For brands, therefore, it is imperative to embrace, in any way they can, the power of word of mouth that social media carries.

The objective ought to be to help customers, enthusiasts, fans and “strangers with expertise” share about the products, and about the topics and ideas, that bring customers and the brand together. “..We want people to search in Google and find the endorsements of our advocates – third parties who say our products are good because they are. Who are these third parties and how do we engage them so that they will authentically want to share? They are the new influencers….” Bell says. Today, managing the relationship with those influencers is what is driving the adoption of Social CRM.

Apple iOS development business opportunity.

Bank of America Corp. and Citigroup Inc. are considering whether to let employees use the Apple Inc. phone as an alternative to Research In Motion Ltd.’s BlackBerry for corporate e-mail, said three people familiar with the plans. The banks are testing software for the iPhone that’s designed to make it secure enough for company messages, said the people, who didn’t want to be named because the plans aren’t public. The tests are the latest sign that RIM may be losing its tight grip on the corporate smartphone market. Companies are experimenting with alternatives, including the iPhone and tablet devices that use Google Inc.’s Android software, as their workers adopt those smartphones for personal use.

This represents the new, emerging opportunity of application development for handhelds in general and for Apple’s handheld operating system, iOS, which runs their smartphone, tablet and music player hardware. The main application of Apple’s developer suite is the integrated development environment (IDE), Xcode. The Xcode suite also includes most of Apple’s developer documentation, and Interface Builder, an application used to construct graphical user interfaces.

The Xcode suite includes a modified version of free software GNU Compiler Collection (GCC, apple-darwin9-gcc-4.2.1 as well as apple-darwin9-gcc-4.0.1, with the former being the default), and supports C, C++, Fortran, Objective-C, Objective-C++, Java, AppleScript, Python and Ruby source code with a variety of programming models, including but not limited to Cocoa, Carbon, and Java. Third parties have added support for GNU Pascal, Free Pascal, Ada, C#, Perl, Haskell, and D. The Xcode suite uses the GNU Debugger as the back-end for its debugger. Among the features of the Xcode suite is the technology to distribute the building of source code over multiple computers. The original, now called Shared Workgroup Build, uses the Bonjour protocol to automatically discover computers providing compiler services, and the free software distcc. More recent versions of Xcode added a second system, called Dedicated Network Builds, which scales better to larger configurations.

Because of modifications to GCC by Apple, Xcode can build universal binaries which allow software to run on both PowerPC and Intel-based (x86) platforms. Furthermore, the modified GCC can build 32- and 64-bit applications for both architectures. Using the iPhone SDK, Xcode can also be used to compile and debug applications for iOS that run on the ARM processor. Xcode also includes Apple’s WebObjects tools and frameworks for building Java web applications and web services (previously sold as a separate product). As of Xcode 3.0, Apple dropped WebObjects development inside Xcode; WOLips should be used instead. Xcode 3 still includes the WebObjects frameworks. As well, Xcode includes DTrace, a dynamic tracing framework created by Sun Microsystems and released as part of OpenSolaris. In Xcode, DTrace is used in the GUI tool Instruments.

The details of the developer program are pretty simple: it costs $99 (about Rs. 5,000) for a company to enrol, which gives it access to resources and training to develop on this platform for one year. The enrolment procedure, with pricing and prerequisites, is here:

For an average-sized software company with some, or even bare minimum, experience of development on handheld device OSes like Windows CE and Palm, it would cost about Rs. 2 lakh in the first year to build out the practice, covering hardware and registration costs. This includes the cost of retraining existing manpower.

The following is a brief look at the kinds of applications being built, to see what the possibilities of application development for iOS are:

Augmented reality Apps:

The future belongs to augmented reality. With Apple iPhone applications you can create a virtual world that gives you all the information you need at your fingertips. These apps also build on the location-based tracking facility of the handheld devices.

Business Apps:

Apple iPhone applications not only help you entertain yourself but are highly useful for business transactions as well.

– Search Engine on your iPhone

– Accessibility to various Business reports, surveys, trends

– Email textual content to your mobile

– Money Management Tools

– Customer Detailed Database

– Calendar Services

– Windows Office Services

Entertainment Apps:

Entertainment is one of the most important aspects of the iPhone, and with the amazing application development possible for it, it never lets you get bored.

– Radio Stations

– Movie Feedbacks

– Music

– Information about local events

– Cartoon Characters

– Fun and Interactive Applications

– Location based Applications

Games Apps:

Playing games is much more fun on iPhone with its wide screen and fully touch sensitive display. This way you can enjoy high quality games like:

– Brick games

– Puzzles

– Quizzes

– Strategy games

– Board games

– War games

There are many more possibilities with Apple iPhone applications as you can develop apps for sports, news, weather, medical, education and what not. There is no dearth of options and requirements and application development for iPhone has answers for all of them.

How Malware works.

Malware is a general term for malicious software, and it is a growing problem on the Internet. Hackers install malware by exploiting security weaknesses on your web server to gain access to your web site. Malware includes everything from adware, which displays unwanted pop-up advertisements, to Trojan horses, which can help criminals steal confidential information, like online banking credentials. Malware is increasingly distributed through web browsers. This tactic has become more common in recent years, as email filtering made it more difficult for attackers to distribute their programs through email spam. Additionally, as firewalls have become more prevalent in the workplace and at home, malware can no longer easily spread from system to system over a network. Through the web, there are opportunities for hackers to penetrate your company’s website and use it as a host to spread malware to your customers. Malware code is not easily detectable and may infect consumers’ computers when they simply browse your web site. This is known as “drive-by” malware, and users are largely (or completely) unaware that their systems have become compromised with this type of attack, making it a particularly insidious problem. Hackers use drive-by malware to spread viruses, hijack computers, or steal sensitive data, such as credit card numbers or other personal information.

How drive-by malware works, and are small web sites at risk?

Drive-by malware downloads itself onto a user’s system without their consent. Cybercriminals exploit browser and/or plug-in vulnerabilities to deliver the malware, hiding it within a web page as an invisible element (e.g., an iframe or obfuscated JavaScript) or embedding it in other content (e.g., a Flash or PDF file) that can be unknowingly delivered from the web site to the visitor’s system. Any web site is at risk. Small sites can be more vulnerable because they are less likely to have the resources and expertise needed to detect and rapidly respond to attacks. Malware may infect your customers’ computers when they simply browse your site. Targeting web sites with low traffic allows hackers to avoid detection longer and cause more damage.

To infect a computer through a web browser, an attacker must accomplish two tasks. First, they must find a way to connect with the victim. Next, the attacker must install malware on the victim’s computer. Both of these steps can occur quickly and without the victim’s knowledge, depending on the attacker’s tactics. One way for an attacker to make a victim’s browser execute their malicious code is to simply ask the victim to visit a web site that is infected with malware. Of course, most victims will not visit a site if told it is infected, so the attacker must mask the nefarious intent of the web site. Sophisticated attackers use the latest delivery mechanisms, and often send malware-infected messages over social networks, such as Facebook, or through instant messaging systems. While these methods have proved successful to a degree, they still rely on tempting a user to visit a particular web site. Other attackers choose to target web sites that potential victims will visit on their own. To do this, an attacker compromises the targeted web site and inserts a small piece of HTML code that links back to their server. This code can be loaded from any location, including a completely different web site. Each time a user visits a web site compromised in this manner, the attacker’s code has the chance to infect their system with malware.
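As an illustration of what that “small piece of HTML code” often looks like, and how a site owner might scan a page for it, here is a rough sketch in Python. It flags iframes that are sized to be invisible or that point to a different domain, which is one common pattern in these compromises; it is a teaching heuristic, not a real malware scanner.

# Rough scanner for one common drive-by pattern: a tiny or hidden
# <iframe> injected into a page, pointing at an attacker's server.
# Illustrative heuristic only; the sample page and domains are made up.

from html.parser import HTMLParser
from urllib.parse import urlparse


class IframeAuditor(HTMLParser):
    def __init__(self, page_domain):
        super().__init__()
        self.page_domain = page_domain
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "")
        width = attrs.get("width", "")
        height = attrs.get("height", "")
        foreign = urlparse(src).netloc not in ("", self.page_domain)
        invisible = width in ("0", "1") or height in ("0", "1")
        if foreign or invisible:
            self.suspicious.append(src)


# Example: a page snippet containing an injected, invisible iframe.
html = '<p>Welcome!</p><iframe src="http://evil.example/x" width="1" height="1"></iframe>'

auditor = IframeAuditor(page_domain="www.mysite.example")
auditor.feed(html)
print(auditor.suspicious)   # ['http://evil.example/x']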

Common types of malware delivery mechanisms:

• Software updates: Malware posts invitations inside social media sites, inviting users to view a video. The link tries to trick users into believing they need to update their current software to view the video. The software offered is malicious.

• Banner ads: Sometimes called “malvertising,” unsuspecting users click on a banner ad that then attempts to install malicious code on the user’s computer. Alternatively, the ad directs users to a web site that instructs them to download a PDF with heavily obscured malicious code, or they are instructed to divulge payment details to download a PDF properly.

• Downloadable documents: Users are enticed into opening a recognizable program, such as Microsoft Word or Excel, that contains a preinstalled Trojan horse.

• Man-in-the-middle: Users may think they are communicating with a web site they trust. In reality, a cybercriminal is collecting the data users share with the site, such as login and password. Or, a criminal can hijack a session, and keep it open after users think it has been closed. The criminal can then conduct their malicious transactions. If the user was banking, the criminal can transfer funds. If the user was shopping, a criminal can access and steal the credit card number used in the transaction.

• Keyloggers: Users are tricked into downloading keylogger software using any of the techniques mentioned above. The keylogger then monitors specific actions, such as mouse operations or keyboard strokes, and takes screenshots in order to capture personal banking or credit card information.

The malware business model
How do attackers use malware to turn a profit? They can use infected computers to generate income in many ways. One of the simplest is through advertising. Just as many of the web sites generate income by displaying ads, malware can display ads that result in payments to the cybercriminal. Alternatively, extortion is used. A large network of infected computers can be very powerful, and some attackers use this threat to extract payments from web site owners. A group of computers controlled by one attacker, known as a “botnet,” can send a large amount of network traffic to a single web site, which can result in a denial of service (DoS) attack. The criminals then contact the web site owner and demand a payment to stop the attack. Criminals also frequently use infected computers to gather valuable user information, such as credentials for online banking. This type of malware, known as an infostealer or banking Trojan, is one of the most sophisticated and stealthy forms of malware. The criminals can then use the private information for their own malicious schemes or sell it to a third-party who then uses it to make a profit.

What is blacklisting, and why is it important to avoid?
Because of the potential damage caused by malware, Google, Yahoo, Bing and other search engines place any web site found with malware on a blocked list, or “blacklist.” Once blacklisted, the search engine issues a warning to potential visitors that the site is unsafe or excludes it from search results altogether. No matter how much search engine optimization you do, if your web site is blacklisted the impact to your business could be devastating. This blacklisting can occur without warning, is often done without your knowledge, and is very difficult to reverse. Taking the proper measures to prevent search engine blacklisting is critical to the long-term success of any web site.

Search Engine Optimization and you!

One of the first sites, or at least the second or third at any rate, that I ever visited on the World Wide Web in 1994 was Yahoo.com – then more or less the default search engine. That site stayed ‘sticky’ for quite a long time, beating out the likes of Alta Vista and Lycos, and saw the dawn of the ‘portal’ before it lost the plot to management and other challenges – not least of which was Google, with a similar set of founders and a better search algorithm! Search has always been critical in the age of machines, large databases and the interlinked computing spawned by DARPA and its packet-based networks back in the day. Earlier computers and networks had something called Gopher, which had crawlers and bots that indexed systems automatically and returned search results. Modern search, in the age of the World Wide Web and web services, is about relevance: about making sense of all that data and getting to what you need. Which often is not easy, since most of the time we do not have a clear definition of what it is that we need, much less in a language that a computer system understands. Thereby came the inexact science of the ‘search term’ – a set of words that perhaps define what we need: ‘cars’, ‘insurance’, and you know the list goes on. Google, the god of search, is essentially an auction house for search terms, by the minute and in real-time dollars and cents, which amount to the billions that they rake in.

Since the core of the operation – the search engine at its heart – ranked on relevance by algorithm, there arose an attempt, and now an industry, devoted to optimizing the content, layout and refresh mechanics of a web site so that it came up in searches on those valued search terms. This was SEO, or search engine optimization, and the focus, like that of the legendary computer nerd cliques of the open source community, was to do the right thing: organic improvement of search results. When you walk into the murky world of manipulation it often gets murkier, and so an evil twin was spawned: SEM, or search engine marketing, a euphemism where the focus is to actively ‘game the system’ through link buys, link exchanges, paid inserts, you name it. The objective is to appear at the top of the search engine results, the thinking being (and it may well be right) that most people cannot read beyond the first two or three results, and an even larger percentage cannot tell the difference between a paid insert and an organic result. The search world has never been the same again, and today, at the level of an industry, it has its own gurus and practitioners, experts, consultants, snake oil merchants, carpetbaggers and practiced maestros. A good place to begin to understand search mechanics and its world is its definition in Wikipedia, here; a great place to read further, dig deep, access forums and other thinkers and practitioners is here, where you can also get whitepapers and original research. This one is from a little while back but still relevant and can be accessed here; here is a quick tutorial web video; and here is a list of some companies that come up in a Google search for ‘SEO Companies India’.

These are only some of the many resources that exist, and the best place to look for them is, as one might guess, a search engine like Google or Bing. The technology has now reached a level, and has been there for a while, where the entire evaluation of a web site for search-engine-optimal content and settings can be automated and delivered as a service over the web; a great example of a startup here is HubSpot. This has also brought its own new challenges: the providers of search – the engines themselves – work all the time to fine-tune their algorithms so that the hacks people find to game the system are continually beaten. A recent result of one such change is discussed here.
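By way of illustration, the most superficial layer of that automated evaluation can be scripted in a few lines of Python, as below: does the page have a title tag and a meta description, and does it actually mention the target keyword? The sample page and keyword are placeholders; real tools obviously go much deeper.

# A tiny on-page SEO checklist: does the page have a <title>, a meta
# description, and does it actually mention the target keyword?
# Purely illustrative; the sample page and keyword are placeholders.

from html.parser import HTMLParser


class PageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""
        self.text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data
        self.text.append(data)


def audit(html, keyword):
    parser = PageAudit()
    parser.feed(html)
    body = " ".join(parser.text).lower()
    return {
        "has_title": bool(parser.title.strip()),
        "has_meta_description": bool(parser.meta_description.strip()),
        "keyword_in_page": keyword.lower() in body,
    }


sample = "<html><head><title>Cheap car insurance quotes</title></head><body>Compare car insurance.</body></html>"
print(audit(sample, "car insurance"))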

Hacking the mind – the next generation technology frontier!

The term hacking brings up a lot of connotations, some of which are even close to what it actually means. So to begin we need to get a clear idea of what hacking is and what hacking does, the criminality and so on. The word’s roots are in medieval English, where it meant to cut away and clear a path, and a good explanation is given here.

It is a lot like cutting away the detritus and making sense of what you have in place. In computer terminology, and closer to what it stands for today, it is about making a connection by cutting through layers of security or incompatible communication protocols. To hack into a network is all about making a connection and getting inside a place that you may not legally or rightfully be supposed to be at.

An Apple iPad, a tablet computer, for example, as sold by its manufacturer Apple Inc., does not let you access its root – complete control of the core of the system, so to speak. The manufacturers feel that you do not need to have it, that it perhaps makes the device less secure, and possibly they want to control what stays on it or runs on it, as in the software applications that get the best value out of the underlying hardware. To use such a computer to its 100 (or more) percent capability, you need to hack into it, and the method specific to that class of devices is called ‘jailbreaking’ – an American nuance!

Hacking, in a way, makes a system better than what it was initially, or at any rate – as with the Apple example – unlocks key functionality that was hidden or locked by its makers! Hacking the human mind is a lot like that. You don’t really send it alphanumeric instructions over SMS or electrical signals, but in a way what is happening is that the neural networks in the mind are firing in different ways – seeking out new pathways – and the neurons are signaling and getting the thing done!

Wired magazine, for example, has run several articles over the last few years on this kind of human hacking and what it has achieved. The most interesting was the tale of Scott Adams, creator of the Dilbert series, who discovered that he had a new kind of nervous ailment that would not allow him to speak as well as he would have liked to. One way around it, he found, was that if he could sing what he wanted to say, he would be able to say it.

Interest in hacking the mind came about, perhaps, as a result of brain injuries or nervous disorders – something that upset the standard pathways and created a need for new ways to be found, or new neural pathways to be created. Occasionally it is about recalling and recreating memory. A great example, of course, is Chris Nolan’s movie Memento, which is probably the best illustration of how the human mind can adjust to a situation where no short-term memories are formed! It would be a lot like if I started this article and forgot what it was that I wanted to write about, or what the entire point was?!

Chris Nolan took the discussion a notch further, as artistes and visionaries are wont to do, and in Inception he gave us the possibility of power over our dreams – a very powerful concept that has been shown to be at least theoretically possible, along with the dangers therein. But the concept is not, in a way, entirely new. Ancient yogic beliefs in India, and then from elsewhere like China, have shown that there are methods and practices that can make us function in a more optimal manner. At its very simplest, and from my personal experience over the years, you can train your mind to, for example, wake you up at a certain time from sleep, like an alarm clock, without an actual alarm clock outside your mind!

I think in the near future – in the really short term, say a quarter or three away – I am expecting actual products that will hardwire or rewire your mind to do things. A little like the way certain kinds of fast-moving color and imagery on a TV, for example, can cause an epileptic reaction in those thus afflicted. Imagine the possibilities if such subliminal imagery can be ‘piped’ or sent to you through the Internet, cable TV, radio, satellite – what have you – and any other means that may be discovered (things like LTE, long term evolution, and 4G come to mind immediately), and imagine that this can be sent to you like a small instruction set that can be picked up and implemented!

To get an idea of what that might look like, consider the way SIM cards get updated, upgraded and sent instruction sets over the air – like maybe an SMS, or a bits-and-bytes data set that they can implement. You can set up roaming, or disable it, or disable all services permanently against an IMEI number. This brings up, of course, the specter of mind control and what have you, but I will always opine: are we not under a bit of, or quite a lot of, mind control as it is, being where we are today? We have advertising, we have government propaganda, we have PR and media, we have search engine optimization and many other means that make us believe what we see, and that get millions into a way of thinking and buying and many other activities.
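
Purely as an illustration of the analogy – the message format and command names below are entirely made up by me, not the real SIM over-the-air protocol – a small instruction set delivered as a short text message might be handled something like this:

```python
# Toy 'over the air' instruction handler, loosely modelled on the idea of a
# carrier pushing a short command to a device. Commands and format are
# hypothetical, chosen only to illustrate the 'small instruction set' idea.
def handle_ota(message: str) -> str:
    command, _, argument = message.partition(":")
    handlers = {
        "ENABLE_ROAMING": lambda imei: f"roaming enabled for {imei}",
        "DISABLE_ROAMING": lambda imei: f"roaming disabled for {imei}",
        "DISABLE_ALL": lambda imei: f"all services disabled for {imei}",
    }
    handler = handlers.get(command.strip().upper())
    return handler(argument.strip()) if handler else "unknown command ignored"

# Example: a single short message flips a setting on the device it reaches.
print(handle_ota("DISABLE_ROAMING:356938035643809"))
```

The point is only that a tiny, well-formed message, picked up by the right receiver, is enough to change behaviour at the far end.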

There will be enough need for legitimate mind-hacking products and services – say smoking cessation programs, and from there a host of other cessation programs for things like crime and violence – to always justify their existence. People will try to work with the risks and benefits, doing the fine balancing act which always threatens to tip at any time. This is because, along with all this and more, there is a huge amount of possibility and potential for all manner of scurrilous activity, cheating, forgery and bad stuff. The way I see it, the likes of RSA, McAfee, Symantec and Verisign – the security guys, the validation guys, the guards, the checks and balances, the keys and the encryption and the gateway security – are going to have a great time in terms of business opportunities.

Portable computers, smart phones and you

The personal computer as we know it was invented 27 years back, in August 1981, and ever since then they have wanted to make it small and portable and with you all the time – not so much perhaps in 1981. Laptops began from a desire to have a full-featured computer that could be easily used anywhere. Their predecessor was called the luggable. These all-in-one systems could be easily transported, but they were heavy and usually not battery powered. The CRT (cathode ray tube) was one of the major reasons luggables were so large and heavy, but the use of a full-size desktop motherboard with room for ISA expansion cards was another size factor.

Alan Kay of the Xerox Palo Alto Research Center was the first to come up with the idea of the portable PC, in the 1970s. Kay envisioned a portable computer much like the ones found today: something small and lightweight that anyone could afford. The first notebook was actually built in 1979 by William Moggridge, who was with Grid Systems Corp. It featured 340 kilobytes of memory, a folding screen, and was made of metal (magnesium). This was hardly like the laptop computers found today, but it was a start.

 
Arguably, the next mobile computer was produced in 1983 by Gavilan Computers. This laptop featured 128 kilobytes of memory, a touchpad mouse, and even a portable printer. Weighing in at 9 pounds without the printer, this computer was actually only a few pounds heavier than notebooks found today.
Gavilan later failed, largely due to its computer being incompatible with other computers – mainly because the Gavilan laptop used its own operating system.

 

Apple Computer introduced the Apple IIc model in 1984, but it wasn’t all that much better than what Gavilan had produced a year earlier. It did feature an optional LCD panel, which had an impact on later notebooks. Finally, in 1986, a true laptop was created by IBM, called the IBM PC Convertible. It featured two modern 3.5-inch floppy drives and space for an internal modem! Also found on the Convertible were an LCD screen and basic applications the user could use to create word-processing documents and schedule appointments. And from then on there was talk of ‘ubiquitous computing’ – everywhere, insidious and all-pervasive! I think we are now at that point in technology development, and in our own evolution, where that time has indeed come. One of the steps in this leap was when microprocessors were introduced and engineers thought about what came next. Next was processors in everything – automobiles and other automatons. The robots as we know them are automatons with processors in them.

For humans, ubiquitous computing came in with mobile phones wanting to be smart and the personal computer wanting to be small. The sub-notebook, the UMPC, has been around for a while now, but I think it was Nicholas Negroponte’s OLPC that really let the cat out of the bag at a proposed $100 a piece! Since the machine would run on an AMD chip, Intel had to do the defensive thing and introduced a mobile platform smaller than the Celeron with the Atom. Soon the Classmate PC on Intel and a toned-down version of Win XP was a viable alternative. Before long Asus, a Taiwanese company, had the Eee PC out and the sub-notebook was a reality! It wouldn’t perhaps be fair to call them pioneers, but they did take the leap of faith to actually commercialise the concept of a full-function ‘sub’ computer at a price of $300 a piece! Today mainstream computer manufacturers despair at the growth of these sub-notebooks at sub-profitable prices with negligible margins and question how long they will last. With the growth of the smart phone and capability on the mobile, why would you need a sub-notebook – except what if it were not the sub but the main notebook, and the next device was the smart phone, like the iPhone, with a totally different interface and input/output? No more mice and keyboards!