Don DeLoach's CEO Blog
When I joined Prime Computer in 1984, there was an emerging market called "GIS," which stands for Geographic Information Systems. This had basically evolved from "computer mapping" and then "AM/FM" (Automated Mapping and Facilities Management), which used thematic maps mostly to overlay utilities' infrastructure on maps. Then came ESRI (the Environmental Systems Research Institute). While ESRI had been around since 1969, they were really pioneering the use of data in the world of computer-based mapping. The maps themselves were merely a spatial lens into the data. ESRI was a big partner of Prime, and I really liked what they were doing. By 1987, I had become a "GIS Specialist." In reality, I knew very little. But GIS was a very, very hot topic then. Everyone wanted to get in on the action. And because of this, I learned a very simple but important lesson: if you know 15% of a topic of which almost everyone else knows only 2%, then you appear to almost everyone as an expert. Really.
Nowadays, everyone wants in on Hadoop. The more advanced people extend these discussions to NoSQL as well. Yet many, many people in the technology space still don't understand much about Hadoop, other than that they must need it, and need it fast. This was the GIS phenomenon in 1987. I was recently at a conference where there were several big Hadoop shops. There were also several seemingly sophisticated organizations expressing the need they felt to move in that direction, though they clearly had no idea what they were really talking about. Many of these people were higher up in their organizations, so perhaps there were much more sophisticated technologists below them. But decisions are often made by people who really don't understand much about what they are doing. I hear that from time to time from my own team! That is actually OK to a point, but I truly believe it makes great sense for decision-makers to understand the very basic aspects of a technology when the well-being of their organizations depends on that technology and the decisions they either make or approve. So here goes my quick attempt to get the Hadoop 2%ers up to 10% or 15%.
What Hadoop is:
- A collection of free, open source programs available from the Apache Foundation. These mainly include a distributed file system (HDFS); a framework for writing processing logic, distributing that processing over large numbers of systems, and gathering back the results (MapReduce); and a data warehouse structure for summarization, query, and analysis (Hive). Hadoop is continually evolving via the Apache Foundation.
- There are a number of companies that have commercialized the distributions to provide added-value capabilities and support that make Hadoop more desirable in commercial production environments. The main four are Cloudera, Hortonworks, EMC, and MapR. In these cases it is, of course, no longer free; however, you are paying for the support and added value.
- It is a proven platform for storing very large amounts of unstructured data. This is more and more a need in industries ranging from digital advertising and social networking to financial services to telecommunications to government, especially defense.
- It is a platform that can scale, utilizing large numbers of commodity distributed-processing resources effectively.
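To make the "processing logic" bullet concrete, here is a toy sketch of the MapReduce programming model Hadoop popularized, run locally in plain Python. On a real cluster, Hadoop distributes the map and reduce phases across many machines and shuffles the intermediate pairs between them; the function names here are my own for illustration, not Hadoop APIs.

```python
# Classic word count, the "hello world" of MapReduce, in plain Python.
from collections import defaultdict

def map_phase(lines):
    """Mapper: emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reducer: group by key and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Imagine these lines spread across thousands of machines' disks.
logs = ["error timeout", "error retry", "ok"]
result = reduce_phase(map_phase(logs))
print(result)  # {'error': 2, 'timeout': 1, 'retry': 1, 'ok': 1}
```

The point of the model is that the mapper and reducer never see the whole dataset at once, which is what lets Hadoop spread the work over commodity machines and gather the results back.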
What Hadoop is not:
- It is not a silver bullet that will solve all your technology problems
- It is not a technology that can be deployed without administrative people to establish or maintain the environment, so there are people costs involved
- It is not an interactive environment (in and of itself), nor is it a real-time system
- It is not mature (yet), but it's definitely getting there
Where it can often be really effective:
- Your organization has to deal with very, very large amounts of unstructured data, as does a video advertising organization such as LiveRail
- Your organization needs to sort and manipulate large amounts of unstructured and semi-structured data, as does a mobile advertising platform such as inMobi when developing and deploying campaigns
- Your organization needs to index large amounts of data, as does an online brand-management firm such as AdSafe Media
Where it is not as effective:
- When you are constrained in terms of talent. Hadoop talent is much in demand, in part because it is quite necessary for establishing and maintaining a Hadoop environment. And as a result, as you might expect, Hadoop expertise is not inexpensive.
- If you run an operation with limited numbers of servers, it is more difficult to take advantage of Hadoop's capabilities. The same is true for available storage.
- If you need to perform very heavy computational analysis against a small amount of data
- If you need to perform interactive investigative analytics, like ad-hoc queries against large amounts of data
Basic commentary: The main takeaway I always suggest to people is that there is no silver bullet. In reality, Hadoop is often used in combination with other technologies. Of course, I was prompted to write this by the growing number of Infobright customers using Infobright in combination with Hadoop. This is a very simple yet powerful approach, whereby Hadoop provides the main repository for mountains of semi-structured and unstructured data, while periodic MapReduce jobs collect smaller mountains of data that are moved into Infobright. That can be automated and is very easy to do, and once the data is in Infobright, it can be interrogated very easily, including robust ad-hoc query support. It is then also accessible using the BI tool sets used in almost any company, including Jaspersoft, Pentaho, MicroStrategy, Cognos, Actuate/BIRT, Business Objects, and many, many more, not to mention Java, PHP, and other common programming languages as well.
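The pattern above can be sketched in a few lines. This is an illustrative toy, not our actual integration: the raw events, table, and column names are hypothetical, the "MapReduce" step is simulated with a local aggregation, and sqlite3 stands in for the analytic SQL store that Infobright would play in practice.

```python
# Hadoop -> analytic-database pattern: boil huge raw data down to a
# small aggregate, then load it into a SQL store for ad-hoc queries.
import sqlite3
from collections import Counter

raw_events = [  # imagine billions of these sitting in HDFS
    ("2012-08-01", "campaign_a", "click"),
    ("2012-08-01", "campaign_a", "view"),
    ("2012-08-02", "campaign_a", "click"),
    ("2012-08-01", "campaign_b", "click"),
]

# Stand-in for a periodic MapReduce job: clicks per campaign per day.
agg = Counter((day, camp) for day, camp, ev in raw_events if ev == "click")

# Load the small aggregate into the analytic database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clicks (day TEXT, campaign TEXT, n INTEGER)")
db.executemany("INSERT INTO clicks VALUES (?, ?, ?)",
               [(d, c, n) for (d, c), n in agg.items()])

# Ad-hoc SQL -- the part Hadoop alone does poorly -- is now trivial.
rows = db.execute("SELECT campaign, SUM(n) FROM clicks "
                  "GROUP BY campaign ORDER BY campaign").fetchall()
print(rows)  # [('campaign_a', 2), ('campaign_b', 1)]
```

The division of labor is the point: Hadoop keeps and crunches the mountain; the SQL store answers questions interactively from the much smaller summary.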
But it all comes down to a few basic questions. What are you trying to accomplish? How much data do you have, and what does it look like? What do you need to do with the data? Who needs to use the data, and in what form? If you want to expose information through a portal for customers to interactively inquire about their operations, you will take a different approach than if you want to provide a repository for archiving documents. Again, it all comes down to the use case, and seldom is there one technology that solves all the problems...even Hadoop. But the right technologies, deployed in the right combinations, can be very, very powerful.
One last thought. There are a number of major players in the Hadoop and NoSQL market that have communicated that as knowledge of Hadoop grows, the true need for most executives to really understand it will actually diminish. The reason for this is that the true value of these technologies will ultimately be delivered as an underlying component of the applications that utilize them. I totally agree with that. In fact, more and more of our customers and revenues are a function of exactly that model, where Infobright is embedded in applications delivered by our OEM solution partners. And I think this is going to be true for many of the emerging technologies we are seeing as well.
If anything, the message to CEO's should be this: make sure you have very strong technology architectural talent in your organization. Make sure they are aware of and conversant in the technology landscape, as your real opportunity will truly come in the combinations of the right technologies for your business. Doing this right can deliver significant competitive advantages in terms of increased savings and advanced capabilities.
By the way, Infobright is used alongside Hadoop with each of the examples in this blog. And many, many more.
I have been in the technology business all my life. I have worked with some inspirational sales leaders and executive leaders. I have known fantastic sales support people, and administrative people who deserve more credit than could ever be given. Likewise, I have seen examples on the dark side. These include sales people willing to twist the truth for a deal, executives engrossed in themselves and icons of hypocrisy, and technologists who were often wrong but never in doubt, causing undue damage to those around them.
Without elaborating on anything other than the technologists, some of the great sales, operational, or executive leaders were Steve Capelli at Sybase, Chuck Wilmoth at Prime Computer, my late friend and CFO of Aleri, Janine Condor, and Mark Logan and Brian Ladyman at YOUcentric, though all for different reasons. I will save the what and why regarding them for a later day. Today, I want to speak about four technologists who have impressed me above and beyond the rest.
The first is Jim Dow. I have long lost touch with Jim, but he ran the Computer-Aided Engineering group at Southern Company in the early 1980s. Jim had a profound understanding of technology architecture and its technological and business implications. Jim was frustrated with incompetence and delighted in true advancement. He was as articulate as he was smart, and would speak often and passionately about the implications of new technologies and on the challenges faced on many fronts.
The second is Dave Walker. I met Dave when he was a consultant, acting in an interim VP of Technology role at Aleri when I took over there. Dave is the complete package. He has the instinctive architectural understanding of Jim Dow, but couples that with very pragmatic, hands-on delivery to customers with exceptional results. Customers love him because he delivers way more value than what he is paid, which is non-trivial. But Dave is also a great guy. He is ethical and straightforward, clear in his communications and upbeat in his demeanor.
The third is Jerry Baulier. He actually took over as the CTO at Aleri and drove the development of the Aleri Complex Event Processing System. Jerry is a deep database technologist, coming out of Bell Labs. He has a profound respect for technology, not unlike many of the colleagues he brought on board (spectacular in their own right) like Jon Riecke and Scott Koledczieski. Jerry is understated but firm, and cares deeply about both his team and the products they build.
The last is our own Graham Toppin. Graham is a bit of a renaissance technologist, with perhaps the broadest range of technology understanding I know. He both loves technology and truly appreciates the future, even though some around him can't always see that far. He is at his best when engaged in healthy debate over various aspects of technology, ranging from ways to solve a significant challenge to the pros and cons of an architectural vision. He seems to get the linkages well before they become clear to others, but crafts that vision into what we do—which is deliver a product uniquely suited for storing and analyzing machine-generated data. This is awesome today, but in Graham's view, never done. As it should be. And he cares as passionately about customers being successful with our products as any human being I have ever seen.
So what message do I get from this? First, a good technologist has to be passionate about technology. It is more of a calling than a job. Second, they have to put in the time. I guess that, in part, is fueled by the passion, but in each case these people invest countless hours to understand the landscape and what the broad scope of technology opportunity means in the context of any of their initiatives. And third, they have to be able to communicate this opportunity. This is an art form. Some have it, but few have it like these guys.
One last note. Roger Bodamer is an advisor to Infobright. Had I worked operationally with Roger, I am sure he would be on this list. He fits all of the traits and then some. He is a technologist I would bet on all day any day, and many have and still do. We love having Roger as an advisor, and on a personal note, as a friend.
To be honest, I don't think I really know what I am talking about. So perhaps I should have titled this "Why I am guessing Database Schemas really matter". These thoughts are borne out of listening to people who I think are seasoned database people, and certainly not attributable to any deep insights I have produced. My book "Lessons from a Database Savant" will have to wait. Sadly.
But I do listen. And I go to a lot of meetings. I meet a lot of people, many of them smart database people. What I have observed is that the database landscape has changed. Not many people doubt that. A long time ago, databases were hierarchical, like IMS on IBM mainframes. Then came System R and relational databases. These were row-oriented databases, and the database schemas were often developed using the concepts pioneered by the likes of Bill Inmon and Ralph Kimball: enterprise data models with highly normalized data (Inmon) or star schemas and snowflake schemas (Kimball). They worked well. But today there is a variety of databases - from row-based, to column-based, to various permutations of NoSQL databases - and the schema approach should absolutely contemplate the use case and the underlying database characteristics. An enterprise data model that works great for an enterprise data warehouse at a retail store is unlikely to work as well for a data mart used to store sensor readings for smart grid deployments.
The issue is that people sometimes fall back on a "what worked in the past is good enough for me" mentality. That may be true when it comes to cooking ingredients or personal hygiene, but in a world that changes as fast as the technology world does, there is a penalty for not keeping abreast of changes.
We have seen people be wildly successful using our technology with schemas that were just counterintuitive to what they were used to doing. De-normalized data in "big wide" or "fat" tables with bazillions of records may just seem wrong. And yet, we see this working every day. We have also seen instances where an insistence on doing things "like I always did" did more harm than good. Really. It's like buying a Ferrari for off-roading. Good car, wrong use case.
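The "big wide table" idea is easy to show in miniature. This is an illustrative sketch with a hypothetical smart-grid table, and sqlite3 standing in for a columnar engine: the dimensions ride along on every row, so queries need no joins, and a column store compresses the repetition away.

```python
# One fat, denormalized table instead of a normalized star schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    ts TEXT, meter_id INTEGER, region TEXT, utility TEXT, kwh REAL)""")
db.executemany("INSERT INTO readings VALUES (?, ?, ?, ?, ?)", [
    # region and utility repeat on every row -- deliberately.
    ("2012-07-01T00:00", 1, "north", "acme", 1.0),
    ("2012-07-01T00:00", 2, "south", "acme", 2.0),
    ("2012-07-01T01:00", 1, "north", "acme", 1.5),
])

# No dimension-table joins at query time: just scan and aggregate.
rows = db.execute("SELECT region, SUM(kwh) FROM readings "
                  "GROUP BY region ORDER BY region").fetchall()
print(rows)  # [('north', 2.5), ('south', 2.0)]
```

In a row store this redundancy would look wasteful; in a column store, a column full of repeated "acme" values costs almost nothing, which is why the counterintuitive schema works.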
One of the biggest reasons technology fails is that it is too complicated and people don't know how to use it. Another is that it is NOT too complicated, but people refuse to use it as intended. The Israeli physicist and thought leader in manufacturing systems, Eliyahu Goldratt, once complained to me when I was attending his workshop that his biggest issue was that companies would bring in his system, OPT, run it, and then disregard the results because they were counterintuitive to what they thought should be done. He finally co-authored the book "The Goal" to explain the concepts better so people would stop disregarding the results.
At Infobright, we are HUGE believers in simplicity. And the database schema definition, while usually very simple, is really important in achieving superior results. And superior results, after all, are pretty important to most everyone.
I think there is a lot to be learned from paying close attention to what is successful. I think there is as much or more to learn from failures. When it comes to IT projects, I have been exposed to a large number of "fantastic learning opportunities." Many come to mind, but I thought I would comment (without mentioning the specific companies) about the three most egregious cases I can recall.
The first was back when I was a co-op student at GA Tech and worked for the local power company. I was basically a kid, and knew less than nothing. I thought everyone around me was a genius. For a while. There was a project called "RMS," for "Records Management System" (clever, huh?), going on across the hall from my office. It was to support the nuclear power plant that was under construction at the time. At first, there were about ten people doing nothing but working on this. Cool. Then there were twenty. Wow, it was like a club. Then forty. Then fifty. Lots of meetings. Several managers, who were the clear power brokers. I had no idea what was going on. Then I started to hear from some people not associated with the project that it was not going well. The people on the project laughed a bit less. The managers yelled a bit more. Then one day, after two years, they were gone. Done. Finished. No more project. As a kid, this seemed just really stupid. How could a very big company filled with super-smart professional people work on a project for two years with fifty people and then just shut it down? How little I knew. Welcome to the real world.
Many years later, I was deeply involved with a bank in Charlotte that was implementing a massive CRM project. Key management talked about it as if it was going to boost earnings by 40%. It was pretty clear that they knew just how clever they really were. They were leading-edge, insightful leaders, ready to pounce on opportunity…. POUNCE!!! The pounce turned out to be more like the cat mistakenly jumping off the fourth-floor terrace of an apartment building. The big bang was more of an eight-digit thud. The plans had been very aggressive, very elaborate. Demos were great. Except demos are often the best representation of a system you ever see.
The last was in the mid to late 2000s, with another bank, this time in Europe. The bank had a very aggressive plan to change its core banking system. The "old one" was, well, old. It worked, but the screens were tired. The architecture was the old Tandem NonStop operating system. So old. Need new. The new system was sexy. Sexy screens. Sexy innovation. But the thing about core banking systems in large banks is that they are tied into everything. It's not like unscrewing a doorknob and putting a new one on. It's more like rewiring and re-plumbing the house without turning off the lights or the water. I was later told by an ex-employee of the bank that by the time they shut the project down, they had spent a whopping 138M euros. That's million with an M. It makes the CRM miss look like a rounding error.
What do these three projects have in common? Probably a great deal, but the characteristic that jumps off the page is that they all had highly elaborate, complex plans that involved huge numbers of people, significant budgets, and lengthy schedules. In each case, the failures were not spotted early, nor were the projects halted once the issues began to surface. The leadership doubled down. Money and people were thrown in, in each case, to salvage what looked to be sinking ships. Yet they all sank anyway.
Don't get me wrong. There are successful projects that are elaborate, complex, expensive efforts involving a great deal of time and people. But the more complexity, time, and people you put into the mix, the more likely you are to see the project go off track. This does not even begin to explore issues around management, issues associated with rapid changes in the business environment and the relationship between those changes and the complex plans, or issues associated with changing technology and the relationship of those changes to the projects. Nor does it consider the impact of certain corporate cultural phenomena on these projects.
My takeaway? Simpler is better. Less is more. Shorter works. Do bite-size projects that align with long-term objectives. Make sure everyone clearly understands the objectives. Succeed in small ways, then iterate. This is less glamorous, but it works.
The power plant got built, but way over budget. The bank put in two different CRM systems; the simpler one was a much greater success even though it cost a fraction as much. And the core banking system project was scrapped and the old Tandem-based system was extended. To my knowledge, it is still running fine to this day.
Home-run sluggers are notorious for also being strikeout leaders. That may be an acceptable tradeoff in baseball, but it is an expensive, disruptive, and more often than not, ill-advised approach to technology projects. It's like the Brad Pitt line in Moneyball, "What do we want, Pete?" To which Jonah Hill replies, "To get on base."
May your projects be simple and successful.
Yours, not ours. Well, that's not entirely true, yours and ours. We believe if we can help organizations save a ton of money, then we can make a lot along the way. Everybody wins. Or almost everybody. When organizations choose to bring in an Infopliance, they will be doing that in lieu of an alternative. So let's explore that. Why the Infopliance?
Over the past two years, anyone who has followed Infobright has surely noticed our focus on machine-generated data. This is because our software has some unique advantages when this is the type of data being stored. This is normally associated with weblog analysis and online or mobile analytics, the storage and analysis of call data records, applications involving storage and analysis of sensor data like smart grid technology, and the storage and analysis of network events, IT logs, and other related use cases. Our customers include a strong, growing list of AdTech players like Yahoo!, AdSafe Media, LiveRail, Bango, and many others. They are solution providers to the telco and network analytics space like JDSU, Mavenir, Polystar, IMImobile, and Sonus Networks, as well as others in the security space like SonicWALL, now a part of Dell. Our website has case study after case study, and our YouTube channel has a number of excellent videos showcasing these success stories. And the reason our joint efforts with our users are successful is simple: we provide a platform for storing and analyzing machine-generated data that delivers great performance, especially when you need to do ad-hoc queries and data mining, with superior disk compression, while virtually eliminating database administrator requirements. So in the end, we deliver great results for an exceptionally low total cost of ownership. It is inexpensive, easy, and quick to get up and running and stay up and running.
In the last year, many of our customers have had explosive data growth, and we have been brought into a number of new opportunities that have a starting point of much greater volumes of data. While the average starting point in the past has been 1TB - 2TB of data, we are seeing more and more starting at anywhere from 10TB to 50TB. And in these instances we are seeing that a natural consideration is to look to general purpose database appliances like Teradata, Netezza, or Oracle Exadata. These are great solutions. They all have their pros and cons. All products do. I have a healthy respect for these companies and their solutions. They are designed to provide an easy-to-establish and easy-to-maintain enterprise data warehouse, or even a mixed-workload, more comprehensive environment. That is not our thing at all. We focus on machine-generated data. So the more extensive platform does way more. But herein lies the driver as to why we are introducing Infopliance: many people are buying general purpose data warehouse appliances and storing machine-generated data, and just machine-generated data, on those machines. It works, but it's a very, very expensive path. Everyone who has done this admits it freely. It is not exactly a state secret. So we saw a gap in the market where, as more and more organizations store larger amounts of this type of data, we could offer the industry's first purpose-built machine-generated data appliance, which would cost significantly less than the more general purpose alternatives.
And the icing on the cake is that while Infopliance acquisition costs are lower, and the operational costs are lower as well, there are additional unique capabilities that you would not get on a general purpose appliance. Capabilities like utilization of our Knowledge Grid, where everything is tantamount to being indexed, although no indexing or other DBA work is required. Capabilities like DomainExpert™, allowing you to exploit the patterns in the data to more efficiently leverage the Knowledge Grid for even greater performance and compression. And capabilities like Rough Query, providing innovative investigative analytics techniques that allow for blindingly fast interrogation of huge datasets in order to better narrow your scope, much like a detective would do in a crime investigation. The way data is interrogated must change as the realities of Big Data sink in. And they will. But still, at the forefront in many minds is the need to store and analyze much more data without spending millions and millions. And that is the primary capability we can deliver with Infopliance.
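For the curious, here is a conceptual sketch of the idea behind capabilities like the Knowledge Grid and Rough Query. This is my own illustrative toy, not Infobright's actual implementation: keep tiny min/max metadata per block of rows, and a query can discard or fully count whole blocks without ever scanning them, only reading the "suspect" blocks in between.

```python
# Block-level min/max metadata lets a query skip most of the data.
def build_metadata(blocks):
    """Per-block (min, max), computed once at load time."""
    return [(min(b), max(b)) for b in blocks]

def query_gt(blocks, meta, threshold):
    """Count values > threshold, scanning only blocks that might match."""
    hits, blocks_read = 0, 0
    for block, (lo, hi) in zip(blocks, meta):
        if hi <= threshold:        # whole block irrelevant: skip it
            continue
        blocks_read += 1
        if lo > threshold:         # whole block matches: count, no scan
            hits += len(block)
        else:                      # "suspect" block: scan row by row
            hits += sum(1 for v in block if v > threshold)
    return hits, blocks_read

blocks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # think 65k-row data packs
meta = build_metadata(blocks)
print(query_gt(blocks, meta, 6))  # (3, 1): three hits, one block touched
```

The detective analogy in the text maps directly onto this: the metadata rules most blocks in or out immediately, and only the ambiguous ones warrant a closer look.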
We believe the machine-generated data market, and more broadly, the proliferation of mobile devices, and the explosion of a Machine to Machine (M2M) world will reveal overwhelming demand for this type of offering. And while we expect there will be other companies to follow in time with their own purpose-built appliances for machine-generated data, we are proud to get here first; we are especially proud of our patented technology that underpins this offering; and we are most proud of the strong customer and partner base that has validated our approach more and more over the last few years.
I learned of, and subsequently got to know, Gary Angel long before I joined Infobright. He and his firm, Semphonic, have quite a strong reputation in Web analytics, especially in the online merchandising world. Anyone dealing with Omniture, Webtrends, or similar online data has probably heard of, if not worked with, Gary and his team. We recently kicked off a joint initiative with Semphonic and our partner Pentaho specifically around Web analytics and digital measurement. While I am clearly biased, I think the work they do is great, and find it to be a very cool space. I am amazed at how far it has progressed and how sophisticated it has become. The idea of individual targeting on a product level has clearly evolved to a place where the product initiatives are enhanced by much more macroscopic analysis based on ultra-rich data and the ability to see and act quickly on opportunities that may have been opaque in the past.
Next week I will be attending the X-Change conference put on by Semphonic. It will be a sharing of ideas among some of the most sophisticated users anywhere. I am looking forward to hearing the stories. I am looking forward to interacting with our customers who are there. But mostly, I am looking forward to learning more about what may once have been technologically possible but far-fetched, and has now become practical, and a reality for many.
I think some people see machine-generated data as one of the more boring subjects on Earth. I find it stimulating. The range of ultra cool things that are fueled by machine-generated data is amazing, and most certainly exploding.
Watch the short video: Don discusses why so many companies involved in online and mobile advertising depend on Infobright to enable their customers to do fast ad-hoc analysis of their web advertising data.
Rigid flexible partnerships. That doesn't really make sense, now, does it? Is a good partnership rigid? Is a good partnership flexible? What is a good partnership? For that matter, what is a good partner? I had this discussion recently with a prospective partner, and I think there are two very important sides to any good partner, one of which is rigid, and the other flexible.
There are certain things, let's call them "best practices" for the sake of appealing to those of you from B-school, that are present in most well-run companies, especially smaller ones experiencing high growth. These range from organizational issues to the sales model to hiring practices and more. Many would like to think that these processes could always be rigidly applied. That is true to a point. There is a tendency in smaller, high-growth companies to "do what you need to do to get the job done". While that might seem to be a great attitude, it can lead to taking undue risks that might be a rounding error for a multi-billion-dollar company, but could well put a smaller one out of business. The area where this manifests itself most is in contracts. It is one thing to be overly rigid, but it is another to be reckless. I have seen many occasions where the circumstance contemplated in a contract that would "never happen", happened. Good governance is not an inhibitor to a good business; it is an enabler. Assuming the products are good and the sales execution is there, it allows companies to grow and scale. In that regard, it is also good for customers and partners. The reckless behavior of a vendor in pursuit of business should be more of a cautionary flag than a green one. So in that context, the rigid partner can be quite good.
But rigidity is only helpful if the business moves forward. While bad governance of a seemingly strong business can become, sometimes overnight, a failing business, good governance of a failing business is still a failing business. The goal is to prosper. What gets you there is often difficult, but seldom mysterious. You get there by helping your customers and partners prosper. In order to do that, you have to understand their goals. You need to be willing to invest in their business, and often, in them personally. This is not only good business; it is often a very rewarding journey on multiple levels. I feel better when our customers and partners reach their goals. I feel better when they make more money. I feel better when we make more money and reach our goals. I feel better... when they feel better. Not just JDSU or Adsafe Media or Bango or Mavenir, but Chuck or David or Chris or Tim or Ray or Terry or any of the people we roll up our sleeves with and work alongside. And guess what? Sometimes a rigid interpretation of a contract would have us roll back down our sleeves and check out. That is totally permissible. It's just totally out of sync with a desire for mutual success. And it's not us.
Good partnerships, like a good marriage, go way beyond the happy words and rhetoric. There is a visceral quality to what happens that cannot be faked, and that cannot be relegated to a three-part color brochure. And also like a marriage, things seldom always go right. But it's often those times that really define the strength of the relationship.
I truly believe we are rigid with good reason, and I also believe we are flexible with good reason. You can be both. I think it's almost the ultimate "best practice". But whatever anyone else thinks, I think it's who we are. And that makes me feel good. Really good.
Talking about machine-generated data is nothing new at Infobright. We do it in the mornings, in the evenings, in a house, with a mouse, here and there.... Everywhere! (apologies to Dr. Seuss). What makes what we do so increasingly compelling is the massive increase in machine-generated data in the world. More mobile phones. More smart devices. Increasingly sophisticated sensor devices with an increasingly broad range of use cases. Can you see how this would explode exponentially? Is it any wonder Cisco and IBM and others continue to put out report after report about the massive, massive contribution machine-generated data makes to the overall expanding world of "Big Data"? And when it comes to the elephant in the room for smart devices, Apple stands alone. Or, should I say, Apple squashes alone. At least for now. We all know the mighty will rise and the mighty will fall, but at last check, a "bad quarter" for Apple still looks like impressive market penetration. And in what must be a religious obligation to continue the culture of innovation (and dramatic flair) of Steve Jobs, the much anticipated iPhone 5 seems to be approaching its long awaited launch.
Smart devices generate a lot of data. iPhones, be it the original, 2, 3, 3S, 4, or 4S, all generate a lot of data, and there are a lot of them out there (understatement of the century). So they generate, collectively, a lot of data, squared. Perhaps more. And sensors tend to generate a lot of data. Some report the location of a sensor at a certain timestamp. Some, like those in the Samsung smartphone (I think), generate barometric readings; some generate temperature readings at a certain location at a certain time; and all do this with a frequency that tends to create large volumes of machine-generated data. One of the more interesting new sensors is the Near Field Communications (NFC) chip. While there are a myriad of uses for this chip, the most obvious one is turning your smartphone into a credit card, where you "swipe" your phone to pay for a transaction. It is expected that there will be a lot of these transactions in the future. And guess what? The rumor, to be sure, is that the new iPhone 5 will come with an NFC chip. So a device that is already the monster in the market and already generating vast amounts of machine-generated data will evolve to a point where it generates much more data.
We think this enhances the quality of life. We think this evolves society in so many innovative ways. We think this will be widespread, and not just an Apple phenomenon. But mostly, we think machine-generated data is cool.
And thus, we think NFC chips on a smartphone, be it from Apple, Samsung, HTC, Ericsson, Nokia, RIM, Motorola, or others, are really, really cool.
I must admit that when you hear all the discussion about how much data is being gathered about everyone, it instinctively does not feel so good. Is there no privacy left? Do I really need a PhD in Computer Science to understand how to provide minimal protection in an ever-increasing online world? It seems that every day you read about issues ranging from the exposure you have to hackers getting into either your accounts directly, or into large corporate files with access to hundreds of thousands of email accounts, to stories of the immense amount of data being collected to profile you, down to things you may be thinking in the future that haven't yet occurred even to you. I have to admit, with as much interaction as I have had with AdTech firms in the past couple of years, I have concluded that the vast amount of information being collected is, if anything, understated. The key question, though, is whether that is cause for concern or optimism.
To be certain, there are those in the world who will use whatever resources are available for doing things that are less than honorable. I think everyone gets that. (For that reason, we are also engaged with organizations whose sole concern is providing protection, but I will leave that discussion for another day). But the vast majority of organizations I meet are doing really cool things that truly do enhance the user experience and add value in an increasing number of ways. And to that end, this gathering of information for creative uses is not limited to selling jeans or hooking you up with concert tickets, but extends to so many walks of life. I recently read an article in the New York Times about how the capture of this type of information is being used to enhance both the targeting and delivery of higher education. This is really cool stuff, and it serves a higher purpose.
I think objectivity is essential when looking at everything. When it comes to the online world (now increasingly via mobile), there are certainly negatives. But there are positives as well. At Infobright we get up every day thinking about storing and analyzing machine-generated data, which is really all of what we are talking about here. Sometimes I feel like we might be enabling the wrong things, but often, and I would say more often than not, we are enabling progress.