Brian's Blog


Over the past year, government agencies, at both federal and state levels, have made significant progress combining big data solutions with advanced analytics to increase the efficiency and effectiveness of government services. Helping to both facilitate and learn from these advances, IBM has sponsored the research and publication of five studies over the past year. In these studies, we share both valuable methodologies and document outcome-oriented best practices.

In my next few blogs, I'll share a few of the highlights from these studies to give a window into which study will be the most helpful in which situation.

In this blog, I'll start with an overview of Demystifying Big Data. This study was cosponsored by IBM and involved industry-wide collaboration among participants from more than 25 private, academic, and public sector institutions. The study was divided into five broad topic areas:

  1. Big Data Definition & Business/Mission Value
  2. Big Data Case Studies
  3. Technical Underpinnings
  4. The Path Forward: Getting Started
  5. Public Policy

I won't go into great detail on each of these topic areas; they can be read and studied directly in the report. Nonetheless, from my point of view, there are two main takeaways.

Big Data Standards

Over the past fifteen or so years, organizations have derived standards for enterprise architecture and implementation methodologies across all of the major development environments. For example, there are standard architectures for J2EE, .NET, and LAMP environments. In addition, through organizations such as the Project Management Institute (PMI), there are standards for implementation. And as government organizations' requirements for custom applications burgeoned over a decade ago, the federal government developed its own standards relative to these three leading architectures. It is these standards that facilitated the effective growth of application development and implementation, which in turn has brought a great degree of increased efficiency to the broad areas of internal operations and citizen engagement.

[Figure: the standardized Big Data Enterprise Model]

It stands to reason that in the area of big data and analytics, standards are equally important. In Demystifying Big Data, the Federal Big Data Commission sets out a series of industry-wide standards that will empower government agencies, at all levels within the United States as well as globally, to efficiently tackle the implementation of big data and analytics. The figure above is a snapshot of the standardized Big Data Enterprise Model.

In a previous blog, I reviewed the definition of big data and highlighted the three Vs: volume, velocity, and variety. This study expands big data to include a fourth V, veracity. From an analytics perspective, veracity makes a lot of sense: big data carries varying degrees of trustworthiness, and highlighting veracity underscores the necessity of analytics in making sense of the data.
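To make the veracity idea concrete, here is a minimal, hypothetical Python sketch (the record fields, trust scores, and threshold are all invented for illustration, not drawn from the report): analytics can weight or discard records according to how much each source is trusted.

```python
# Toy illustration of the fourth V, veracity: each record carries a
# trust score, and the analysis weights (or drops) data accordingly.
# Field names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Record:
    value: float   # the measurement we want to analyze
    trust: float   # estimated trustworthiness, 0.0 (none) to 1.0 (full)

def weighted_mean(records: list[Record], min_trust: float = 0.2) -> float:
    """Average the values, discounting low-veracity records.

    Records below min_trust are dropped outright; the rest are
    weighted by their trust score.
    """
    usable = [r for r in records if r.trust >= min_trust]
    if not usable:
        raise ValueError("no records meet the veracity threshold")
    total_weight = sum(r.trust for r in usable)
    return sum(r.value * r.trust for r in usable) / total_weight

if __name__ == "__main__":
    sample = [
        Record(value=100.0, trust=0.9),   # vetted source
        Record(value=250.0, trust=0.1),   # unverified outlier, dropped
        Record(value=110.0, trust=0.7),
    ]
    print(f"veracity-weighted mean: {weighted_mean(sample):.1f}")
```

The point of the sketch is simply that once trustworthiness is modeled explicitly, it becomes an input to the analytics rather than a silent assumption.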

Getting started

The second major value the study adds for the industry is a road map, with real-world examples, for organizations to get started reaping the benefits of big data. The study organizes these steps as follows:

1. Define business requirements: Above, I compared big data systems to any other system implementation, and by starting with requirements definition, the Federal Big Data Commission underscores that similarity. All effective system implementations start with defining the business requirements. It is important to highlight that the Commission also states specifically that:

The approach is not “build it and they will come,” but “fit for purpose.”

2. Plan to augment and iterate: The Commission also points out that successful big data initiatives “favor augmenting current IT investments rather than building entirely new enterprise scale systems.” As I mentioned in an earlier blog, there have been many quality system implementations across many functional areas over the past twenty years that collect and utilize a great deal of data. Successful big data initiatives recognize these prior successes and use them as springboards for additional business value (a toy sketch of this augment-and-iterate pattern follows this list).

3. Big data entry point: When starting a big data initiative, an organization neither starts from scratch nor tackles all four Vs at once. Organizations therefore increase their odds of success by choosing one or more entry points and building over time.

4. Identify gaps: As with most successful system implementations, successful big data implementations tend to utilize an Agile-like methodology that identifies gaps in the initial requirements and iterates to fill them. This enables early successes while building buy-in from stakeholders.

5. Iterate: The gaps are filled via iterations, as are the platform capabilities. The Big Data Enterprise Model illustrated above is a comprehensive solution, but with an iterative approach that model can be completed over time, in parallel with closing the business-requirements gaps.
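As promised in step 2, here is a hypothetical sketch of augment-and-iterate in Python. The schema, agency names, and metrics are invented purely for illustration; the pattern it shows is layering new analytics over an existing operational store, one iteration at a time, rather than replacing that store.

```python
# Hypothetical sketch of "augment and iterate": a new analytics layer
# reads from an existing operational database in place. Each iteration
# adds a metric without touching the source system.

import sqlite3

def build_sample_store(conn: sqlite3.Connection) -> None:
    """Stand-in for an existing operational system's database."""
    conn.execute(
        "CREATE TABLE service_requests (agency TEXT, days_to_close REAL)"
    )
    conn.executemany(
        "INSERT INTO service_requests VALUES (?, ?)",
        [("DOT", 4.0), ("DOT", 6.5), ("Parks", 2.0), ("Parks", 3.5)],
    )

# Iteration 1: one simple metric against the existing data.
def avg_days_to_close(conn: sqlite3.Connection) -> dict[str, float]:
    rows = conn.execute(
        "SELECT agency, AVG(days_to_close) FROM service_requests GROUP BY agency"
    )
    return dict(rows)

# Iteration 2, added later with no change to the source system: flag
# agencies whose average exceeds a service-level threshold.
def slow_agencies(conn: sqlite3.Connection, threshold: float = 5.0) -> list[str]:
    return [a for a, avg in avg_days_to_close(conn).items() if avg > threshold]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    build_sample_store(conn)
    print("avg days to close:", avg_days_to_close(conn))
    print("over threshold:", slow_agencies(conn))
```

Each new function is a small, independently shippable iteration, which is exactly why this approach favors augmenting current investments over enterprise-scale rebuilds.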

When you put both of the above major takeaways together, the message is that "big data" doesn't have to be so big that it can't be implemented in reasonable, bite-sized iterations. Big data should be implemented with one eye on enabling short-term value and the other on operationalizing for long-term returns.
