Integrating generative AI in Valcori for smarter tender management

Friday, March 24, 2023
Stefanos Peros
Software engineer
Ruben Opdenakker
Product manager

The generative AI wave is here to stay, and it's set to revolutionize the way we work and use digital products. At Panenco, we are passionate about harnessing the power of AI not by building AI models ourselves, but by cleverly integrating existing AI models into our product portfolio and adding direct value to our product users.

First product use case: Valcori

The proof is in the pudding! Our B2B tender management platform, Valcori, provided the ideal use case to start with. When creating a tender, users can now automatically generate questions via ChatGPT based on the product being tendered and the topics they deem important.

Need a maintenance service provider for your elevators over the next three years? Valcori will generate all the crucial questions to ask your potential suppliers. By leveraging the possibilities of ChatGPT, we went from idea to launching our first AI-powered feature on Valcori in just 12 days.

ChatGPT integration in Valcori, a B2B tender management platform


One microservice that serves them all

The entire process is handled by our serverless microservice that converts JSON input from the backend into a ChatGPT prompt using product-specific templates. The ChatGPT response is then converted into JSON and sent back to the backend. 
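A minimal sketch of that conversion step, assuming a simple template registry and illustrative names (this is not Valcori's actual code):

```python
import json
from string import Template

# Hypothetical product-specific prompt templates, keyed by use case.
TEMPLATES = {
    "valcori_tender_questions": Template(
        "Generate $count questions a buyer should ask suppliers when "
        "tendering for: $subject. Focus on these topics: $topics. "
        "Respond ONLY with a JSON array of strings."
    ),
}

def build_prompt(template_id: str, payload: dict) -> str:
    """Convert the backend's JSON input into a ChatGPT prompt."""
    return TEMPLATES[template_id].substitute(payload)

def parse_response(raw: str) -> list:
    """Convert the model's raw text response back into JSON for the backend."""
    questions = json.loads(raw)
    if not isinstance(questions, list):
        raise ValueError("expected a JSON array of questions")
    return questions

prompt = build_prompt("valcori_tender_questions", {
    "count": 3,
    "subject": "elevator maintenance over the next three years",
    "topics": "response times, pricing, certifications",
})
```

Because the prompt template is the only product-specific part, a new product integrates by registering its own template rather than re-implementing the pipeline.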

Since the microservice only needs to be set up once, much of the complexity of integrating ChatGPT into other products is eliminated: each product can reuse the same microservice with its own prompt templates. This enables quick ChatGPT integrations for use cases across our entire portfolio.

Microservice approach

Overcoming ChatGPT's creative tendencies

During implementation, we discovered that ChatGPT occasionally gets a bit too creative and returns invalid responses, even when the prompt is consistent. To address this, we implemented Triple Modular Redundancy in our production environments: we send three parallel requests to ChatGPT and return the first valid response to the client (the product backend). While this triples request costs, the current pricing structure of the OpenAI API makes it feasible. As AI models improve and GPT-4 is rolled out, we expect such redundancy measures will become obsolete.
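The redundancy scheme can be sketched as follows; `call_chatgpt` and `is_valid` are stand-ins for the real API call and response validation, not the actual implementation:

```python
import asyncio
import json

async def call_chatgpt(prompt: str, attempt: int) -> str:
    """Stand-in for the real OpenAI API call (simulates one flaky reply)."""
    await asyncio.sleep(0.01 * attempt)
    return "not-json" if attempt == 0 else '["Question A", "Question B"]'

def is_valid(raw: str) -> bool:
    """Check that the response parses as the JSON array we asked for."""
    try:
        return isinstance(json.loads(raw), list)
    except json.JSONDecodeError:
        return False

async def triple_modular_request(prompt: str) -> str:
    """Fire three parallel requests; return the first valid response."""
    tasks = [asyncio.create_task(call_chatgpt(prompt, i)) for i in range(3)]
    try:
        for finished in asyncio.as_completed(tasks):
            raw = await finished
            if is_valid(raw):
                return raw
        raise RuntimeError("all three responses were invalid")
    finally:
        for task in tasks:
            task.cancel()  # no-op for tasks that already completed
```

Returning on the first valid response also shaves latency, since the client never waits for the slowest of the three requests.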

Broadening use cases across our portfolio

With Valcori's success, we're actively exploring additional ChatGPT integrations throughout our product portfolio using this microservice. By refining the service and minimizing implementation logic in our products' backends, we're paving the way for rapid generative AI integration into any product.

Upgrade your digital product with ChatGPT

Our current setup enables us to integrate generative AI into products within weeks. Interested in leveraging OpenAI's capabilities for your digital product? Reach out to our team – we'd love to share more about our innovative approach and jointly think about how to integrate generative AI into your product.
