Cloud-native CAD will disrupt the PLM platform paradigm

An illuminating blog post by Onshape engineering team member Ilya Baran revealed some fundamentals of how the new cloud-native CAD system works: “We are careful to distinguish several types of data: the User Interface (UI) state—for example, selection, camera view, current tab; the Part Studio definition—for example, feature list, part names and colors, import data; [and] Regeneration results—the “b-rep” (bodies, faces, edges and so on), triangles for display, regen errors.”

How do these data types differ? “The UI state generally doesn’t persist (except for things like named views),” Baran wrote. “The regeneration results are cached, but they can always be rebuilt from the definition. The Part Studio definition is what we store in the database, and that is where collaborative editing happens.”

Then Baran explained something that begins to suggest why we believe Onshape is not only a breakthrough in CAD, but also poised to disrupt the established paradigm for PLM platforms. “For a given Part Studio, at each point in time, the definition is stored as an eternal, immutable object that we internally call a microversion,” he wrote. “Whenever the user changes the Part Studio definition (for example, edits an extrude length, renames a part, or drags a sketch), we do not change an existing microversion, but create a new one to represent this new definition. The new microversion stores a reference to the previous (parent) microversion and the actual definition change. In this way, we store the entire evolution of the Document; this is accessible to the user as the Document history, allowing the user to reliably view and restore any prior state of an Onshape Document.”
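The mechanism Baran describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Onshape's actual implementation: the class and function names are invented, and where the real system stores a compact definition change, this sketch stores the full definition for simplicity. The key idea survives: an edit never mutates an existing microversion; it appends a new one that points back to its parent, so the whole history remains walkable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen => instances cannot be reassigned after creation
class Microversion:
    """Illustrative stand-in for an immutable microversion (names are hypothetical)."""
    definition: dict                       # Part Studio definition at this point in time
    change: str                            # description of the edit that produced it
    parent: Optional["Microversion"] = None  # link to the previous microversion

def apply_edit(current: Microversion, change: str, new_definition: dict) -> Microversion:
    """Editing never modifies `current`; it creates a new microversion referencing it."""
    return Microversion(definition=new_definition, change=change, parent=current)

def history(mv: Microversion) -> list:
    """Walk parent links back to the root -- the Document history, newest first."""
    out = []
    while mv is not None:
        out.append(mv.change)
        mv = mv.parent
    return out

root = Microversion(definition={"features": []}, change="create")
v1 = apply_edit(root, "add sketch", {"features": ["sketch1"]})
v2 = apply_edit(v1, "add extrude", {"features": ["sketch1", "extrude1"]})

print(history(v2))    # ['add extrude', 'add sketch', 'create']
print(v1.definition)  # {'features': ['sketch1']} -- the old state is untouched
```

Because every prior microversion is still reachable through the parent chain, "view and restore any prior state" reduces to following those links, which is exactly the property Baran credits for Onshape's reliable Document history.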

Next, Baran revealed how Onshape is fundamentally different from older-generation engineering software. “Basing Onshape on immutable microversions also makes for a great foundation for other collaboration tools: those we already have, such as the Follow Mode or the Compare tool, as well as those we are developing for the future,” he said. “It also has benefits beyond just collaboration abilities: Because old microversions are never modified, data integrity is better preserved, and having a history of changes allows us to debug exactly how a Document came to be when a user has a problem or when we detect a problem through our logs.”

[Image: Onshape Document history]

New cloud-native database architecture changes everything

What makes all this possible and practical? Much of the answer lies in Onshape being built on MongoDB, one of the new “NoSQL” databases widely used in cloud-native applications, rather than the relational database management systems (RDBMS) that most engineering applications have relied on until now.

“Relational databases were not designed to cope with the scale and agility challenges that face modern applications,” MongoDB said, “nor were they built to take advantage of the commodity storage and processing power available today.” MongoDB functions as back-end software for Craigslist, eBay, Foursquare, LinkedIn and many other massively deployed cloud-based services.

Besides being fast, scalable and designed to exploit cloud computing resources, NoSQL databases have a capability called “schema-on-read.” This allows data to be captured, stored and subsequently acted on with almost limitless freedom, without the application developer having to create a schema for the data in advance. Having to create such a schema as the first step in creating a database, a requirement of traditional RDBMS technology, is known as “schema-on-write.”
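The contrast can be made concrete with a toy example in plain Python, standing in for a real RDBMS and a real document store (the field names are illustrative). Under schema-on-write, the shape of the data is fixed before any record arrives; under schema-on-read, records are stored as-is and shaped only when queried.

```python
# Schema-on-write: the shape is declared up front, before any data exists.
SCHEMA = ("part_id", "name", "mass_kg")
rows = []

def insert_row(part_id, name, mass_kg):
    rows.append((part_id, name, mass_kg))  # any new field requires a schema change

# Schema-on-read: store each record as-is; no upfront schema is required.
documents = []

def insert_doc(doc: dict):
    documents.append(doc)  # fields may differ from record to record

insert_doc({"part_id": 1, "name": "bracket", "mass_kg": 0.4})
insert_doc({"part_id": 2, "name": "housing", "material": "ABS"})  # new field, no migration

# The "schema" is applied at read time, extracting only what this query needs.
names = [d["name"] for d in documents]
print(names)  # ['bracket', 'housing']
```

The second `insert_doc` call is the point: the record with a `material` field that the first record lacks goes in without any migration step, which is exactly what schema-on-write forbids.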

Joe Pasqua with MarkLogic, another NoSQL database provider, explained the benefits of schema-on-read: “For decades now, the database world has been oriented toward the schema-on-write approach. First you define your schema, then you write your data, then you read your data and it comes back in the schema you defined up-front. This approach is so deeply ingrained in our thinking that many people would ask, ‘How else would you do it?’ The answer is schema-on-read. Schema-on-read follows a different sequence—just load the data as is and apply your own lens to the data when you read it back out.”

What’s the advantage? “More and more these days, data is a shared asset among groups of people with differing roles and differing interests who want to get different insights from that data,” Pasqua said. “With schema-on-write, you have to think about all of these constituencies in advance and define a schema that has something for everyone, but isn’t a perfect fit for anyone. When you are talking about huge volumes of data, it just isn’t practical. With schema-on-read, you can present data in a schema that is adapted best to the queries being issued. You’re not stuck with a one-size-fits-all schema.”

But that’s not all. “One of the places where projects often go off the rails is when multiple datasets are being consolidated,” Pasqua continued. “With schema-on-write, you have to do an extensive data modeling job and develop an über-schema that covers all of the datasets that you care about. Then you have to think about whether your schema will handle the new datasets that you’ll inevitably want to add later. If you’re lucky enough to get through that process, Murphy will strike again and you’ll be asked to add, change or drop a column (or two or three). With schema-on-read, this upfront modeling exercise disappears.”
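Pasqua's consolidation scenario can be sketched the same way. In this hypothetical example (all dataset and field names invented), records from a CAD system and an ERP system, with different shapes, are loaded into one store with no über-schema, and each audience reads its own view back out with a lens suited to its query.

```python
# Two differently-shaped datasets, as they might arrive from separate systems.
cad_parts = [
    {"part_id": 1, "name": "bracket", "mass_kg": 0.4},
]
erp_items = [
    {"sku": "BRK-100", "part_id": 1, "unit_cost": 2.15, "supplier": "Acme"},
]

# Consolidation under schema-on-read: just load both as-is -- no data
# modeling exercise, no über-schema covering every field of every dataset.
store = cad_parts + erp_items

# Each constituency applies its own lens at read time.
engineering_view = [d for d in store if "mass_kg" in d]    # geometry-relevant records
procurement_view = [d for d in store if "unit_cost" in d]  # cost-relevant records

print(engineering_view[0]["name"])      # 'bracket'
print(procurement_view[0]["supplier"])  # 'Acme'
```

Adding a third dataset later, or a new field to an existing one, changes nothing about the store itself; only the lenses that care about the new fields need to know they exist.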

For all types of data and meta-data

Those underlying capabilities of Onshape’s database architecture—together with its ability to import, operate on and archive data from other engineering applications—begin to suggest the true scope and scale of the company’s long-term ambitions and vision. Indeed, it has made no secret of this. Around the time of Onshape’s public unveiling last year, a user posted in its online discussion forum: “Is Onshape intending to develop PLM eventually, or are they going to go the route of partners to provide that? I ask because Onshape is a database system with the correct platform to seemingly handle this functionality.”

In reply, Steve Hess from Onshape’s UX/PD team posted: “As you know Onshape was built with data management in mind. The data management features of Onshape are at the core of the product and will become more exposed as Onshape matures. In time, Onshape will be the system of record for all types of data and meta-data…The data stored in Onshape will be visible and accessible to your other enterprise systems.” (Our emphasis.)

Already, the ways in which Onshape lets multiple users work simultaneously on the same design serve to eliminate many problems that established PDM and PLM providers have spent years “solving”—and at the same time perpetuating, because of the database architectures their systems were built on. As Onshape founder and chairman Jon Hirschtick told us, “For starters we eliminate 50 to 60% of all the functions of traditional PDM—they simply have no role (copying files, managing directory structures) in our world.”

Far from being a throwaway line, we think Hirschtick’s phrase “for starters” is in dead earnest. To date, Onshape’s best-understood benefits are how it removes many of the headaches and costs of locally installed software, and of CAD collaboration and data management. But we believe its larger goal is to evolve a next-generation product development platform that “in time,” as Hess said, “will be the system of record for all types of data and meta-data.”

Onshape’s ability to do this is grounded in two key benefits of schema-on-read. First, it “gives you massive flexibility over how the data can be consumed,” explained Tom Deutsch, Solution CTO with IBM, and second, “your raw/atomic data can be stored for reference and consumption years into the future.” These capabilities position Onshape to extend its radical simplification of CAD collaboration and data management to more and more areas of PLM, where users have had enough of complexity and expense and are ready for something new.
