
The GenSynth Platform:
Quickly Build Enterprise AI You Can Trust

Slash months off development time while improving model performance and gaining unparalleled insights

Deep learning projects often suffer from lengthy cycles and high costs due to the manual, iterative, and often guesswork-driven nature of key processes. These challenges are compounded by the difficulty of identifying errors, biases, data problems, and other issues, and of validating the trustworthiness of the deep learning solution being built.

DarwinAI’s software platform, GenSynth, equips developers and data scientists with a unique toolset to accelerate the deep learning development cycle in a trusted and transparent way.

Whether you’re building AI for the cloud or the edge, GenSynth can meet the most demanding requirements for your use cases.

GenSynth Platform Overview


See how GenSynth Platform helps you quickly build deep learning models with more transparency.

Explore the topics below to learn what GenSynth does, how it works, and how to use it.

What GenSynth Does

Developing and operationalizing enterprise AI is a complex and expensive undertaking, characterized by vast amounts of data, manual processes, high computing costs, lack of understanding and low trust.

GenSynth allows you to reach operational requirements for each of your AI use cases faster—with less guesswork and more trust.

How GenSynth Works

GenSynth—short for Generative Synthesis—employs explainable AI to obtain a deep understanding of a deep learning model. The platform leverages this understanding to:

  • Automatically generate new models which meet performance targets (e.g., accuracy) within operational constraints (e.g., parameters, size, FLOPs), reducing development timelines from months down to days
  • Provide transparency at every stage of the development process, from revealing bottlenecks in models, to visualizing performance comparisons of different experiments, to identifying errors, biases and issues in both models and data
  • Reveal critical factors which cause the model to make decisions—showing why decisions are made, unearthing hidden bias, and enabling more effective and efficient audits

GenSynth produces ready-to-use, high-performance and trustworthy models, any of which can then be deployed, modified and integrated into your MLOps environment.

GenSynth is the product of years of scholarship at the University of Waterloo (see the seminal paper) and addresses the shortcomings of existing explainability techniques by:

  • Capturing the inextricable link between data and models: in the words of our Chief Scientist, “There is no data understanding without model understanding and no model understanding without data understanding.”
  • Accurately explaining the critical factors that lead the model to make each decision: that is, in the absence of these critical factors, the prediction being made appreciably changes
  • Quantitatively explaining the way the model makes a decision: the algorithm produces meaningful and actionable outputs which developers can use to make their models better
  • Reflecting model intuition faithfully, regardless of how it ‘reflects back’ to us: the process articulates the model's reasoning authentically, since human intuition can differ significantly from model intuition

In essence, GenSynth’s proprietary explainable AI (XAI) garners an intricate understanding of a model’s inner workings, thus allowing it to better explain the model’s behavior. Put another way, GenSynth uses XAI technology to obtain—and provide to developers and engineers—a direct and global understanding of the model’s decision-making process.

The benefits of this approach are twofold: GenSynth can automatically generate new, better-performing models from this understanding, and it can give developers clear, actionable explanations of how and why the model behaves the way it does.

How to Use GenSynth

In traditional MLOps, more than 50% of development time is spent pursuing accuracy and operational targets by tuning the architecture, retraining models and validating performance and correctness—GenSynth automates and accelerates these operations without sacrificing results.

Let’s see how!

Step 1: Define Operational Targets

The very first step is to define the performance targets and operational constraints that characterize the desired use case (e.g., size, computational complexity, accuracy, etc.).
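As a rough illustration (plain Python, with hypothetical field names rather than any actual GenSynth schema), the targets and constraints for a use case might be captured like this:

```python
# Hypothetical, illustrative structure for use-case targets and constraints.
# Field names are made up for this sketch; GenSynth's own configuration is
# entered through its interface.
use_case = {
    "performance_targets": {"top1_accuracy": 0.95},
    "operational_constraints": {
        "max_parameters": 5_000_000,   # model size in parameters
        "max_model_size_mb": 20,       # on-disk size
        "max_flops": 1e9,              # computational complexity budget
    },
    "deployment_target": "edge",       # e.g. "cloud" or "edge"
}
```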

Step 2: Collect and Prepare Data

To begin the deep learning development process itself, you collect and prepare your dataset as you normally would.
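For example, a typical TF/Keras data-preparation step (ordinary practice, nothing GenSynth-specific) might look like this:

```python
import tensorflow as tf

# Load a standard dataset, normalize pixel values, and build a batched
# input pipeline -- the same preparation you would do for any DL project.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(10_000)
            .batch(64)
            .prefetch(tf.data.AUTOTUNE))
```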

Step 3: Design Model Prototype

Next, you construct a prototype based on human-driven design principles and best practices. Essentially, the prototype provides the initial scaffolding of the model while leaving final macro-architecture and micro-architecture decisions to the machine-driven aspect of the process.
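A minimal sketch of such a prototype, assuming a Keras workflow and a small image-classification task, might look like the following; it only provides the scaffolding (input shape, rough depth, task head), leaving the detailed architecture decisions to the machine-driven stage:

```python
import tensorflow as tf

# Human-designed prototype: just the scaffolding of the model.
prototype = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
prototype.summary()
```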

So far, everything has followed your normal process—but that’s about to change.

Step 4: Explore and Optimize Design with Generative Synthesis

This stage takes only a few minutes to configure and automatically produces a collection of models which meet the needs of your specific use case.

To get started with machine-driven design exploration, you provide GenSynth with your dataset, model prototype, and use case parameters (i.e., performance and operational targets).

Use the Dataset Manager to inform GenSynth how to access your underlying data. The built-in Python script editor makes it quick and easy.
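A purely hypothetical sketch of such a script is shown below; the real interface is defined inside the Dataset Manager, and the function names and file paths here are illustrative only:

```python
import numpy as np

# Hypothetical dataset-access script: tell the platform how to iterate
# over training and validation data in batches.
def get_training_batches(batch_size=64):
    data = np.load("train_images.npy")     # illustrative paths
    labels = np.load("train_labels.npy")
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size], labels[i:i + batch_size]

def get_validation_batches(batch_size=64):
    data = np.load("val_images.npy")
    labels = np.load("val_labels.npy")
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size], labels[i:i + batch_size]
```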

Next, use the Model Manager to set up the model entity, specifying input tensors, output tensors, loss tensors, and accuracy metrics.
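As a hedged illustration (assuming a TF/Keras model; the layer and tensor names are made up for this example), the tensors and metrics a model entity refers to might be identified like this:

```python
import tensorflow as tf

# A tiny model whose input and output tensors are explicitly named so the
# Model Manager entry can reference them.
inputs = tf.keras.Input(shape=(32, 32, 3), name="images")
x = tf.keras.layers.Flatten()(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax", name="probs")(x)
model = tf.keras.Model(inputs, outputs)

print("input tensor: ", model.inputs[0].name)   # e.g. "images"
print("output tensor:", model.outputs[0].name)  # e.g. "probs"

# The loss tensor and accuracy metric are declared alongside the model;
# exactly which entries GenSynth expects is configured in the Model Manager.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
```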

Finally, you create a new Job. In doing so, you’ll use the straightforward interface to: specify your entities (choosing from ones you’ve already created, or selecting from GenSynth’s pre-created model prototypes and dataset entities); specify your model tensors; define the learn, build, and explain parameters; and provide your use case parameters.
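As a purely illustrative sketch (the field names below are hypothetical, not GenSynth's actual Job schema), a Job ties this information together roughly as follows:

```python
# Hypothetical view of what a Job brings together; all keys are illustrative.
job = {
    "dataset_entity": "cifar10_dataset",        # from the Dataset Manager
    "model_entity": "cifar10_prototype",        # from the Model Manager
    "tensors": {"input": "images", "output": "probs"},
    "learn_params": {"epochs": 50, "optimizer": "adam"},
    "build_params": {"mode": "edge"},           # e.g. edge- vs cloud-oriented
    "explain_params": {"enabled": True},
    "use_case_params": {"top1_accuracy": 0.95, "max_flops": 1e9},  # from Step 1
}
```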

With all the information in place, GenSynth automatically generates and iterates models which meet your performance targets within your operational constraints.

Importantly, GenSynth allows you not only to leverage multiple GPUs, but also multiple machines—accelerating jobs and providing an easy way to manage compute resources shared amongst a team.

The Jobs screen allows you to manage each experiment and includes collaborative features to help developers work within and across teams. By examining the details and trade-offs of the different models GenSynth generated, you can choose the one that works best for you (perhaps one for the cloud, another for the edge, and so on).

GenSynth provides tremendous transparency, letting you do a full comparative analysis of all the models it generated by showing FLOPs, channels, parameter distribution, and more, and by highlighting bottlenecks.

You can dive deeply into each model to examine performance tradeoffs—this example shows the tradeoffs between accuracy and FLOPs (and you can even compare multiple models in the same graph).

At this point, you can simply click on any of these models to generate new ones, or download a model to further modify, extend, and integrate it.

Validate via Explainability

When building a precise and robust neural network it’s important to recognize that output alone is insufficient to communicate the model’s strengths and weaknesses. The opaque nature of deep learning, which is being increasingly scrutinized as AI becomes pervasive, is akin to trying to debug a classical computer program without the source code.

This limitation is a key reason why design audits are frequently omitted from DL workflows, as the alternatives to XAI-based assessments are cumbersome, time-consuming and often involve scripts, interpretations and considerable manual effort. Moreover, they aren’t especially effective, most notably for unusual and non-intuitive cases.

However, investing the time to audit your model can dramatically accelerate and simplify development: identifying the gaps in the design and the underlying factors behind them greatly increases your ability to design effectively and can avoid considerable pain and debugging down the road. Moreover, the insights gained through explainability can not only be used to generate better networks, but can also illustrate why they reach particular conclusions.

GenSynth provides you with unparalleled visibility into how your model makes decisions, allowing you to quickly understand error scenarios and to verify that correct decisions are being made for the right reasons.

Like many tools, GenSynth shows you the confusion matrix. Unlike other tools, GenSynth goes far beyond what a traditional confusion matrix can tell you.

Through GenSynth's example-driven explainability capabilities, clicking on any element in the confusion matrix retrieves all data samples sharing that same decision scenario and allows you to gain new insights into the commonalities and trends in these related data samples. In addition, GenSynth shows you the critical factors that led to each decision—as determined by GenSynth’s proprietary attention-driven explainability capabilities.
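The idea behind this example-driven exploration of the confusion matrix can be sketched in a few lines of ordinary Python (a generic illustration, not GenSynth's implementation): a cell of the matrix is simply the set of samples that share the same true-label/predicted-label decision scenario.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels standing in for your own evaluation results.
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2])

cm = confusion_matrix(y_true, y_pred)
print(cm)

def samples_for_cell(true_label, pred_label):
    """Indices of all samples falling into one confusion-matrix cell."""
    return np.where((y_true == true_label) & (y_pred == pred_label))[0]

# e.g. every sample the model predicted as class 1 that is really class 0
print(samples_for_cell(0, 1))
```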

Diving deeper still, you can see the quantified impact of each critical factor through GenSynth's counterfactual-driven explainability capabilities. In this example, had the highlighted section not been leveraged by the model to make its decision, the model would have incorrectly identified this image as a printer instead of an iPod.
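A rough, generic sketch of this counterfactual idea (occlusion-based, and not GenSynth's actual algorithm) is to remove the critical region and measure how much the prediction changes without it:

```python
import numpy as np

def counterfactual_impact(model, image, box):
    """Occlude a critical region and compare predictions with and without it.

    box = (top, left, height, width) of the region identified as a
    critical factor; `model` is any Keras-style classifier.
    """
    t, l, h, w = box
    occluded = image.copy()
    occluded[t:t + h, l:l + w, :] = image.mean()   # remove the factor

    original = model.predict(image[np.newaxis], verbose=0)[0]
    without = model.predict(occluded[np.newaxis], verbose=0)[0]

    # Predicted class with and without the factor, plus the largest shift
    # in class probability as a simple measure of impact.
    return original.argmax(), without.argmax(), float(np.abs(original - without).max())
```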

Sometimes, investigating the reasons for a decision will reveal problems with data annotation. In this example, GenSynth's scenario-driven explainability capabilities revealed that the model identified pedestrians who weren’t labeled as such in the original dataset; quickly identifying and understanding this false error scenario saves an enormous amount of audit time.

Explain and Understand the Model Architecture

Crucially, GenSynth allows you to examine the full network visualization to better understand your model. Instead of just showing connections, the platform also shows you the importance and performance analytics for each component of the neural network, providing even deeper insight into how the model works and where its performance trade-offs lie.

Step 5: Deploy to Hardware

Finally, you deploy your model to hardware—weeks or even months earlier than you could under the traditional model development process.

Technical Compatibility

The GenSynth platform is accessed through a web interface and can be deployed on a public cloud, private cloud, or local workstation (on premises).

GenSynth seamlessly fits into your existing development stack, supporting:

  • Popular deployment and development frameworks, including TensorFlow and Keras
  • Any trained or untrained convolutional neural network (CNN) or multilayer perceptron (MLP) architecture
  • Any data, including image, video, audio, text, tabular, and time-series
  • Any target hardware for deployment (e.g., GPU, CPU, microcontroller, FPGA), whether in the cloud (e.g., AWS, Azure, Google Cloud) or on the edge
  • Hardware acceleration platforms (e.g., NVIDIA TensorRT, Intel OpenVINO, nGraph, Arm NN, and Xilinx Vitis)

GenSynth FAQs

How does GenSynth choose what kind of paradigm to use?

As the user, you provide a model prototype and then GenSynth iterates and optimizes based upon your operational targets and parameters.

Are the generated models just optimized versions of my initial model?

GenSynth learns from the model prototype and data you provide, and generates models with different macro- and micro-architectures as it explores new models. Models generated using GenSynth Edge mode, in particular, tend to differ significantly from the prototype to tailor the model to operational constraints.

What does GenSynth produce?

GenSynth produces ready-to-go models in the form of computational graphs (e.g., for Keras or TensorFlow) which you can plug into your existing MLOps pipeline to integrate, modify, or extend as you want.
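For example, assuming a Keras artifact (the file path below is illustrative), a generated model can be loaded and reused like any other model:

```python
import tensorflow as tf

# Load a generated model and treat it like any hand-built Keras model:
# inspect it, fine-tune it, extend it, or export it for serving.
model = tf.keras.models.load_model("gensynth_generated_model.h5")
model.summary()

# e.g. export to the TensorFlow SavedModel format for a serving pipeline
tf.saved_model.save(model, "exported_savedmodel")
```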

Does GenSynth have ways to address imbalanced data?

GenSynth provides a number of tools for you to handle imbalanced data, including full customizability for things like weighted losses, batch balancing, and more. We also have some exciting new features in the works to further assist you.
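As a generic illustration of the weighted-loss idea (standard Keras practice, not a GenSynth-specific API), classes can be weighted inversely to their frequency:

```python
import numpy as np

# Toy label set with a heavy class imbalance.
labels = np.array([0] * 900 + [1] * 100)
counts = np.bincount(labels)

# Weight each class inversely to its frequency so the minority class
# contributes proportionally more to the loss.
class_weight = {c: len(labels) / (len(counts) * n) for c, n in enumerate(counts)}
print(class_weight)  # roughly {0: 0.56, 1: 5.0}

# Then pass it to training, e.g.:
# model.fit(x, labels, class_weight=class_weight, ...)
```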

How do you ensure GenSynth doesn’t miss the best model?

There is no way to conclusively determine if a particular model is the theoretically “best” model; however, the generative learning and synthesis at the heart of GenSynth provides an efficient and effective means of exploring different options and tradeoffs in pursuit of optimized models.

Can GenSynth generate models with different architectures?

Yes, GenSynth inherently generates models with varying macro- and micro-architectures.

Is GenSynth superior to a neural architecture search?

Perhaps the best way to answer this question is to say that GenSynth takes a very different approach that is much more computationally efficient and cost-effective.

Is GenSynth leveraging reinforcement learning?

No, GenSynth relies on Generative Synthesis, which is based on an interplay between a Generator and an Inquisitor (see the seminal paper).

How do I get GenSynth on my cloud?

GenSynth is provided as a containerized platform for easy deployment.

What makes GenSynth different?

The most significant differentiator for GenSynth is that it takes a generative approach to explainable AI, which enables it not only to understand the inner workings of models for providing transparency, but also to automatically generate new models based on this understanding.

Alternative explainable AI strategies tend to be proxy approaches that try to probe a black box to provide explanations, but these don’t have any understanding of the actual process by which the model works. Because GenSynth learns the intrinsic, internal properties, it’s able to gain a much deeper understanding of how your model makes decisions.

Furthermore, by taking a generative approach, GenSynth allows for much greater flexibility to produce more efficient and higher-accuracy models than more restrictive alternatives for improving model efficiency (such as pruning). It is also orders of magnitude faster than neural architecture search and provides more control over the results.