An Exclusive Interview with Mike Waas – One of the World’s Top Experts in Database Research

Mike Waas founded Datometry in 2013, after having spent over 20 years in database research and commercial database development. Prior to Datometry, Mike held key engineering positions at Microsoft, Amazon, Greenplum, EMC, and Pivotal, where he worked on some of the most commercially successful database systems, with a special focus on query optimization and query processing. Mike is recognized for heading up the development of Greenplum’s ambitious query engine Orca, which has set new standards for MPP systems. Orca is available as part of the open-source distribution of Greenplum Database, is widely used in enterprise database products including Pivotal Greenplum, Pivotal HAWQ, and Alibaba ApsaraDB, and serves as a teaching platform in data science academia.

Mike holds an M.S. in Computer Science from the University of Passau, Germany, and a Ph.D. in Computer Science from the University of Amsterdam, The Netherlands. He is considered one of the world’s leading authorities on the science of databases, and has authored or co-authored 36 publications and holds 24 patents on the subject of query processing.

Mike’s vision for Datometry, an early-stage start-up, is to redefine enterprise data management: its Adaptive Data Warehouse Virtualization technology provides enterprises the fastest and simplest path to becoming cloud-native.

The company is headquartered in San Francisco, California, and partners with leading cloud service providers and database vendors including Amazon Web Services, Google Cloud Platform, Microsoft, Pivotal, and Snowflake, among others.

 

Tell us about your early career before founding Datometry. What attracted you to database research and commercial database development?

 

While still in high school, I developed commercial software for payroll and tax processing. Starting out in enterprise software development at age 15, I had my first brush with data management and query processing and found the technical challenge of extracting useful, actionable information from data compelling.

During college, I became extremely interested in database research while working on combinatorial optimization problems in the context of query optimization. These are computationally hard problems that a database system must solve for every submitted query, and, while the computer science literature is full of computationally hard optimization problems, query optimization is, in my opinion, the one with the greatest practical relevance and importance for enterprises.
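
To make the combinatorial nature of the problem concrete, consider join ordering, the textbook instance of what an optimizer must solve for every query: the number of possible plans grows exponentially with the number of tables. Below is a minimal sketch of the classic dynamic-programming approach, with made-up cardinalities and selectivities; it is illustrative only, not code from Orca or any Datometry product.

```python
# Illustrative only: Selinger-style join ordering via dynamic programming
# over relation subsets. Cardinalities and selectivities are hypothetical.
from itertools import combinations

card = {"orders": 1_000_000, "customers": 50_000, "nation": 25}
sel = {
    frozenset(("orders", "customers")): 1e-5,
    frozenset(("orders", "nation")): 0.04,
    frozenset(("customers", "nation")): 0.04,
}

def join_size(left, right, left_rows, right_rows):
    """Estimate join output size under an independence assumption."""
    s = 1.0
    for l in left:
        for r in right:
            s *= sel.get(frozenset((l, r)), 1.0)
    return left_rows * right_rows * s

def best_plan(relations):
    """best[S] = (cost, output rows, plan) for each subset S of relations."""
    best = {frozenset([r]): (0.0, card[r], r) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for k in range(1, size):
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    lc, ln, lp = best[left]
                    rc, rn, rp = best[right]
                    rows = join_size(left, right, ln, rn)
                    cost = lc + rc + rows  # cost = total intermediate rows
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, rows, (lp, rp))
    return best[frozenset(relations)]

# Joining customers with nation first is roughly 7x cheaper here than
# starting with orders, which is exactly why plan choice matters.
print(best_plan(["orders", "customers", "nation"]))
```

Even at three tables the program examines every split of every subset; the number of splits grows roughly as 3^n with n tables, which is why production optimizers pair dynamic programming with aggressive pruning and heuristics.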

During the course of my Ph.D., I worked as a researcher for several years at some of the top European research consortia on the theoretical underpinnings of optimization for parallel databases, but I was always drawn to system architecture and the actual development of database software. What deeply interests me about database development is that it combines incredibly hard engineering challenges with enormous practical relevance.

Which one of these experiences was foundational – which one of them prepared you to start your own company and be a CEO?

The desire to start my own company was fueled by my realization that large companies simply cannot innovate with the velocity or agility of a start-up, and starting your own company is one of the most effective ways to bring radical ideas to market. That said, I do want to acknowledge that working at large companies has been an outstanding learning experience for me: I have had the privilege of working with some of the sharpest minds in the industry, and I value what I learned under their tutelage to this day.

For a start-up to create technology that is truly disruptive, the vision of the founder or founders must be ahead of its time and, to outsiders, quite unthinkable or unbelievable. Over the course of my career, I have encountered a fair number of naysayers who questioned the feasibility of some of my more innovative, ahead-of-their-time engineering ideas in database science. I believe this, for me, has been the single most helpful preparation for starting Datometry and becoming a CEO.

As a CEO, I find that a key challenge is remaining steadfast in many situations and resisting the naysayers, and I have also found that the more advanced the ideas, the more stamina and focus on mission is required. At the same time, I do believe that it is very important to listen to your critics, understand the disconnect, and avoid becoming your own echo chamber.

It is extremely gratifying that the Datometry team and I have been successful in undertaking and delivering on an engineering feat that many in the database industry and academia have said was impossible to accomplish. I look forward to solving more technological challenges in database science that conventional wisdom says are not solvable.

 

What were your initial goals for Datometry? Would you say that today you’ve managed to achieve most of them?

 

Our initial goal was to demonstrate the technical feasibility of our data warehouse virtualization technology. I knew that the moment we could successfully demonstrate feasibility, market forces would kick in, and we would be in a position to take Datometry to the next level. One of my key strategies in accomplishing this goal has been to build a one-of-a-kind team of outstanding subject matter experts. Most of our engineers hold Ph.D.s in database research and developed database technology at leading companies such as Amazon, Microsoft, Pivotal, Oracle, and Google before joining Datometry. We are also fortunate to have some widely recognized industry and academic database experts as technical advisors.

 

The Datometry team has built out our virtualization technology in record time—just under a couple of years—and, earlier this year, we started running proof-of-concept (POC) implementations with some of the largest enterprises in the US. The fact that Fortune 500 enterprises are engaging in POCs with Datometry—an early-stage start-up—speaks volumes about the importance and urgency of the problem we are solving, as well as the promise our technology brings to the table. We offer enterprises a unique solution that fundamentally changes their data management going forward as they look to implement cloud-first strategies to gain competitive advantage.

 

I am proud that we accomplished our initial goals so resoundingly, which resulted in our oversubscribed Series A financing. Now we are in the next stage of rewriting the enterprise data management story.

What further goals are you working towards with the company and what’s your vision for the future of its services?

 

I consider what we have accomplished so far to be just the beginning. At Datometry, we envision a world in which enterprise database customers no longer connect applications and databases directly.

 

We are convinced that the data warehouse virtualization technology pioneered by Datometry will become the management and control plane that connects all of the enterprise’s applications with the underlying data processors. This is a radical departure from the status quo and turns traditional enterprise database IT on its head: instead of configuring applications for a given database, administrators in the future will not have to worry about compatibility with a new data warehouse but will simply configure what makes the most sense from a business economics perspective. This takes interoperability of applications and data warehouses to a whole new level.

 

We have seen similar trends in network virtualization and other areas of the IT stack: once the connection between two otherwise tightly coupled components is broken by virtualization, an entire ecosystem of new solutions and companies springs up.

 

We are in the process of scaling the company to address the rapidly increasing demand and engagement we are seeing with Global 2000 companies in North America, Europe, and Asia-Pacific.

What challenges would you say you and Datometry encounter on a regular basis? How are these resolved?

The enterprise data management industry has grappled for a very long time with the challenge of truly virtualizing a database or a data warehouse. Because of that history, when we present our data warehouse virtualization technology, there is a fair amount of disbelief that we are able to run any application on any database without first rewriting the application.

The idea of virtualizing data management is so challenging that the only way customers come to believe the technology exists is to see a POC, so our proof point is running POCs in customer labs. When customers see the ease and speed with which our technology allows existing applications to run on a cloud-native data warehouse, there is instant excitement that we have broken a key barrier to adopting cloud-first technologies quickly, and at huge savings to the enterprise.

Illustrating the above point is a great success story of our very first POC with a Global Fortune 100 retailer. The retailer was looking to move their very large, custom business intelligence application with close to 40 million application queries executed per week to Microsoft Azure SQL DW. Their own testing and POCs had already found that to rewrite the approximately 40 million queries for the new cloud data warehouse would be a multi-year project with costs running in the tens-of-millions of dollars. Our POC was able to demonstrate that Datometry could enable the migration to the new data warehouse within 12 weeks. As you can imagine, they were blown away.

How have you been able to solve the technological challenge of virtualizing data warehouses—a problem that has been pretty much unsolvable until now?

The credit would have to go to the exceptionally talented Datometry engineering team. As I mentioned earlier, our engineers come from industry powerhouses—Google, Oracle, Microsoft, Amazon, and Pivotal—with experience in building critical infrastructure and innovative data management products both on-premise and in the cloud.

Besides holding doctorate degrees in database research and distributed systems, the Datometry engineering team collectively has over 40 domestic and international patents or active patent applications and over 70 peer-reviewed scientific publications to their name.

 

Can you briefly describe how Datometry’s Adaptive Data Warehouse Virtualization technology works?

Our flagship product, Datometry® Hyper-Q™, is a virtualization platform that works by intercepting the communication between applications and a data warehouse and redirecting that communication to an alternate database such as a cloud-native data warehouse. It receives application requests, translates them in real time to the language and protocol of the cloud-native data warehouse, and, on the way back, translates the results in real time to feed them back into the application. This means the applications do not change at all; the applications do not even know that the data warehouse underneath them has changed.
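
As a minimal sketch of that translate-execute-translate-back loop (illustrative only, not Hyper-Q’s actual implementation; the dialect rules and the stubbed warehouse client below are hypothetical):

```python
# Conceptual sketch of a query-translating proxy; not Datometry code.
import re

# Hypothetical dialect rules: expand a Teradata-style "SEL" abbreviation
# and rewrite DATE literals into a more portable CAST form.
TRANSLATION_RULES = [
    (re.compile(r"^\s*SEL\b", re.IGNORECASE), "SELECT"),
    (re.compile(r"\bDATE\s+'(\d{4}-\d{2}-\d{2})'"), r"CAST('\1' AS DATE)"),
]

def translate_statement(sql):
    """Rewrite an incoming statement into the target warehouse's dialect."""
    for pattern, replacement in TRANSLATION_RULES:
        sql = pattern.sub(replacement, sql)
    return sql

def translate_results(rows, adapters):
    """Map result values back to the representation the application expects."""
    return [tuple(adapters.get(i, lambda v: v)(v) for i, v in enumerate(row))
            for row in rows]

def handle_request(sql, execute_on_target, adapters=None):
    """One proxy round trip: translate, execute, translate back."""
    rows = execute_on_target(translate_statement(sql))
    return translate_results(rows, adapters or {})

# Demo with a stub standing in for a real cloud warehouse client:
stub_target = lambda q: [("2024-01-01", 42)]
print(handle_request("SEL * FROM sales WHERE d = DATE '2024-01-01'", stub_target))
```

The application keeps emitting its native dialect and protocol; only the proxy knows the warehouse underneath has changed, which is what makes the approach non-invasive.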

Given that most public clouds today have provisions for stateless servers to integrate with virtual IPs, load balancers, and so forth, Hyper-Q is able to leverage standard cloud components and configurations, and enterprises get the benefit of a plug-and-play architecture.
 

Can you tell us a bit about the recent funding that Datometry received? What are you going to use the financing for?

Our $10 million Series A funding round was led by the venture capital and growth equity fund Redline Capital with participation from Dell Technologies Capital and the venture capital firm Acorn Pacific Ventures.

We plan to use the funds to accelerate product development in enterprise data management technology and build the company to match our vision of connecting the world’s applications with data, independent of technology choices.
 

How are the challenges that Datometry faces set to change, in conjunction with the current fundamental transformation in the global database market and the future needs of clients?

The global market dynamics for enterprise data management have completely shifted with the availability of cloud-native solutions, which means moving to the cloud has become an imperative for enterprises. Interestingly, it has taken almost 10 years for the public cloud to gain traction with Global 2000 companies, and this traction can be attributed to the cloud service providers investing heavily in securing their platforms and fully accommodating privacy regulations. For enterprises, one of the triggers for adopting cloud technologies is expiring software licenses and upcoming renewals of very expensive large-scale hardware.

Analysts and industry watchers are predicting that within the next 5 to 10 years, the majority of the $40+ billion database market will re-platform to the cloud. The trend started in 2014 as part of enterprise business intelligence modernization efforts and has accelerated significantly in recent quarters. Demand for data science and analytics is increasing; according to the leading industry group TDWI, “today’s consensus says that the primary path to big data’s business value is through the use of so-called ‘advanced’ forms of analytics based on technologies for mining, predictions, statistics, and natural language processing (NLP). Each analytic technology has unique data requirements, and DWs must modernize to satisfy all of them.”

Enterprises looking to adopt cloud-native data warehouses and databases are finding that the cloud-native options are challenging all conventions: for example, they no longer require tedious tuning because scale and performance are easy to adjust, the newest versions of software and hardware are always available, and cloud elasticity ensures significant CAPEX and OPEX savings.

What this means is that the cloud is turning data warehouses and databases into commodities and the new model is Pay-for-API instead of Pay-for-Technology. Every Fortune 500 company is formulating, if not already executing, a cloud-first strategy. The key question facing them is how to shift decades of on-premise data management to the cloud without the risk, expense, and time typically required for such migration projects.

We believe virtualizing the data warehouse is the cornerstone of any cloud-first strategy because data warehouse migration is one of the riskiest and most expensive initiatives that a company can undertake on their journey to adopting cloud-native data management.

Interestingly, the cost of migration is primarily the cost of process, not technology, and this is where Datometry comes in with its data warehouse virtualization platform. We are the key that unlocks the power of the latest technology for enterprises and lets them gain competitive advantage.
 

In terms of market competition, where does Datometry stand globally? What makes the company unique?

We view Datometry as a major disruptor of the current enterprise process of moving to the cloud, which involves massive risk, expense, and time. So far, we have not seen any technology in the market that rivals or even comes close to Datometry’s Adaptive Data Warehouse Virtualization technology.

Using Datometry Hyper-Q, enterprises can run any application on any data warehouse within days, making databases interchangeable; they gain greater choice in vendors and technologies; and they eliminate the need for costly application migrations.

 

What is the Datometry Go-To-Market Strategy?

Datometry is building a global channel-based business with key technology and services partners in the space. All partners (cloud service providers, database vendors, and system integrators) benefit significantly from working with Datometry, as we unlock substantial business opportunities for them. This makes for an ideal setup for reselling through partners and tight integration with their technology.

In working with cloud service providers and database vendors, we can offer enterprises short POCs (lasting only a few weeks) that demonstrate the speed and ease of using our data warehouse virtualization technology, resulting in a fast deployment of cloud-native data management systems.

 

Where do you see Datometry in five years?

We believe that five years from now, no one will connect an application to a data warehouse or database directly, and our technology will become the ubiquitous data management fabric managing all communications between data warehouses and applications.

In addition, Datometry will become the standard for interoperability in the data space, enabling the rapid adoption of the latest data management technologies while the enterprise maintains total control of its data, its most valuable asset.

Any final thoughts?

Based on the traction we have seen with Global Fortune 500 enterprises in the last six months, it is clear that we have created a technology category with massive potential. Virtually every enterprise on the planet faces the data management re-platforming problem that our data warehouse virtualization technology resolves: the benefits of the cloud have become too obvious to ignore, and the penalty for not adopting cloud-native technology has become too large.

In many ways, I like to think of Datometry’s potential along the lines of VMware, which created and defined a category that seemed obscure at first but went on to revolutionize IT forever.

 

Website: www.datometry.com

Contact: press@datometry.com
