Table of contents for Teradata Influencers Summit 2015
- Teradata Technology Innovation #TD3PI
- Teradata Marketing and MGM #TD3PI
- Teradata Strategy Overview #TD3PI
- Teradata Unified Data Architecture #TD3PI
- Teradata Listener #TD3PI
- Think Big Consulting Update #TD3PI
- Think Big Data Lake Program #TD3PI
- Real Time Big Data #TD3PI
- Teradata Aster Comments #TD3PI
- Teradata Aster AppCenter #TD3PI
Ron Bodkin of Think Big (founded in 2010 and acquired by Teradata in late 2014) came up next to talk about their consulting practice around Hadoop and other big data infrastructure. The firm now has over 100 people (onshore and offshore) with a 100% focus on Big Data technology. It is run largely separately, without any specific focus on Teradata or Teradata customers. Customers range across all industries, but engagements generally center on consumer behavior data or product behavior data (sensor data, for instance). While mostly focused on consulting, Think Big has also developed a dashboard engine for Hadoop (of which more later).
The service portfolio covers:
- Big Data strategy and roadmap
Adoption is strong, with lots of companies struggling to get from experimentation to something serious and others who don’t know where to start. The critical issue is how to get beyond a first use case.
- Data Lake implementation
Initial Hadoop deployments are often very focused, so the tools and techniques used don’t scale. This offering is about properly designing a data lake in terms of automation, data variety, governance, provenance and so on – helping companies scale their initial big data efforts.
- Analytics and Data Science
There is lots of technology out there, but skills are still tough to find, driving consulting needs. Last-mile access to the data lake for business users is also critical.
- Training and Managed Services
Application support, platform support, retainers, staff augmentation etc.
The overall environment Think Big develops is based on a data lake, but they see that this lake needs to contain a data repository for more trusted data as well as data lab areas for more experimental work. All this requires metadata, governed ingestion and so on.
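To make the zoned-lake idea concrete, here is a minimal sketch of that kind of layout – a raw landing area, a trusted repository, and lab areas, with provenance metadata captured on ingest. The zone names, class, and metadata fields are my own illustrative assumptions, not Think Big's actual design:

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical governed data lake layout (illustrative only):
# raw     - landing area for data as it arrives
# trusted - validated data promoted from raw
# lab     - scratch areas for experimental work
class DataLake:
    def __init__(self, root):
        self.root = Path(root)
        for zone in ("raw", "trusted", "lab"):
            (self.root / zone).mkdir(parents=True, exist_ok=True)

    def ingest(self, src, source_system):
        """Land a file in the raw zone and record provenance metadata."""
        src = Path(src)
        dest = self.root / "raw" / src.name
        shutil.copy2(src, dest)
        meta = {
            "source_system": source_system,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(dest.read_bytes()).hexdigest(),
            "zone": "raw",
        }
        # Metadata travels alongside the data as a sidecar file.
        dest.with_suffix(dest.suffix + ".meta.json").write_text(json.dumps(meta))
        return dest

    def promote(self, raw_path):
        """Copy validated data from raw into the trusted repository,
        carrying its provenance metadata forward."""
        raw_path = Path(raw_path)
        dest = self.root / "trusted" / raw_path.name
        shutil.copy2(raw_path, dest)
        meta_src = raw_path.with_suffix(raw_path.suffix + ".meta.json")
        meta = json.loads(meta_src.read_text())
        meta["zone"] = "trusted"
        dest.with_suffix(dest.suffix + ".meta.json").write_text(json.dumps(meta))
        return dest
```

The point of the sketch is the governance step: nothing lands without a source system, timestamp, and checksum attached, and promotion from raw to trusted is an explicit, metadata-preserving operation rather than an ad hoc copy.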
Clearly the work is still largely focused on descriptive analytics based on Hadoop – I am still waiting to see significant data mining/predictive analytics using Hadoop data.