Big data is a lie. For years now, we have been told that every company should save every last morsel of digital exhaust in some sort of database, lest management lose some competitive intelligence against … a rival, or something.
There is just one problem with big data though: it's freaking big.
Processing petabytes of data to produce business insights is expensive and time-consuming. Worse, all that data sitting around paints a big, bright red target on the company's back for every hacker group on the planet. Big data is expensive to store, expensive to secure, and expensive to keep private. And the payoff may not be that great in the end anyway: often, well-curated and selected datasets can deliver faster and better insight than endless amounts of raw data.
What is a business to do? Well, it needs a Tonic to cure its big data ills.
Tonic is a “synthetic data” platform that transforms raw data into more private and manageable datasets usable by software engineers and business analysts. Along the way, Tonic's algorithms de-identify the original data and produce synthetic but statistically similar datasets, which means that private information isn't shared insecurely.
For example, an online shopping platform will have transaction history on its customers and what they bought. Sharing that data with every engineer and analyst in the company is dangerous, since that purchase history could contain personally identifying information that no one without a need-to-know should have access to. Tonic could take that original payments data and transform it into a new, smaller dataset with the same statistical properties, but untethered from the original customers. That way, an engineer could test their app or an analyst could test their marketing campaign, all without triggering concerns about privacy.
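To make the idea concrete, here is a minimal sketch of what such a transformation could look like. This is a hypothetical illustration, not Tonic's actual algorithm: the function name `synthesize_purchases` and the simple normal-distribution fit are assumptions for demonstration. The key properties it shows are that no real customer ID survives into the output, while an aggregate statistic (the mean purchase amount) is approximately preserved.

```python
import random
import statistics

def synthesize_purchases(real_rows, n_synthetic, seed=0):
    """Hypothetical sketch of synthetic data generation (not Tonic's code).

    real_rows: list of (customer_id, amount) tuples from the real table.
    Returns n_synthetic rows with fresh fake IDs and amounts drawn from
    a normal distribution fitted to the real amounts.
    """
    rng = random.Random(seed)
    amounts = [amt for _, amt in real_rows]
    mu = statistics.mean(amounts)
    sigma = statistics.stdev(amounts)
    # Fresh IDs sever the link to real customers; sampled amounts
    # preserve the distribution's location and spread (clamped at 0).
    return [
        (f"synthetic-{i}", max(0.0, rng.gauss(mu, sigma)))
        for i in range(n_synthetic)
    ]

real = [("alice", 20.0), ("bob", 35.0), ("carol", 27.5), ("dave", 31.0)]
fake = synthesize_purchases(real, n_synthetic=1000)

# No real customer ID appears in the synthetic dataset.
assert not {cid for cid, _ in real} & {cid for cid, _ in fake}
```

A production system would of course go much further, preserving joint distributions across columns and referential integrity across tables, but the privacy intuition is the same: the engineer gets data that behaves like the real thing without ever touching the real thing.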
Synthetic data and other approaches to managing the privacy of large datasets have garnered huge attention from investors in recent months. We reported recently on Skyflow, which raised a round to use polymorphic encryption to ensure that workers only have access to the data they need and are blocked from accessing the rest. BigID takes a more overarching view, simply tracking what data is where and who should have access to it (i.e. data governance) based on regional privacy laws.
Tonic's approach has the benefit of helping solve not just privacy problems, but also scalability challenges as datasets get larger and larger. That combination has attracted the attention of investors: today, the company announced that it has raised $8 million in a Series A led by Glenn Solomon and Oren Yunger of GGV, the latter of whom will join the company's board.
The company was founded in 2018 by a quartet of founders: CEO Ian Coe worked with COO Karl Hanson (the two first met back in middle school) and CTO Andrew Colombi while they were all working at Palantir, and Coe also previously worked with the company's head of engineering Adam Kamor at Tableau. That training at some of the Valley's largest and most successful data infrastructure companies forms part of the product DNA for Tonic.