
Integrating generative AI in Valcori for smarter tender management

Friday, March 24, 2023
Stefanos Peros
Software engineer

Generative AI is here to stay, and it is set to fundamentally change digital products. One key principle still holds: you need to think backwards from the end-user experience to the technology. That's exactly how we approached a recent brainstorming session for Valcori, a procurement platform that helps medium-sized businesses create tenders and streamline supplier selection.

On average, 80% of the tendering process is repetitive manual work that is prone to impactful errors. We asked ourselves how Large Language Models (e.g. ChatGPT) could help our users save time and improve overall tender quality.

The plan: replacing a sizeable part of the heavy lifting for our users when creating a new tender through the power of AI, with minimal manual data input

The result: an intuitive user flow to instruct the LLM to return an elaborated set of questions and parameters that fit neatly in Valcori's tender structure

The timeline: just 3 weeks to go from plan to an MVP in production, saving users 55% of the time usually spent on tender creation

The proof of the pudding is in the eating, so let's walk through a concrete example:

Need a maintenance service provider for your elevators over the next three years? Valcori will generate all the crucial questions to ask your potential suppliers.

ChatGPT integration in Valcori, a B2B tender management platform

Link to demo video: https://youtu.be/UhpO3-XNY1w

Overcoming ChatGPT's creative tendencies

During the implementation, we discovered that ChatGPT occasionally gets a bit too creative and returns invalid responses, even when the prompt is consistent. To address this, we implemented Triple Modular Redundancy in our production environments: we send three parallel requests to ChatGPT and return the first valid response to the client (the product backend). While this increases costs, the current pricing structure of the OpenAI API makes it feasible. As AI models improve and GPT-4 is rolled out, we expect such redundancy measures to become obsolete.
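The redundancy pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not the production code: `call_model` is a hypothetical stand-in for the real OpenAI API call, and the validation rule is an assumed example of a schema check.

```python
import concurrent.futures
import json
import random


def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the real OpenAI API call; it occasionally
    # returns malformed output to mimic the model's creative failures.
    if random.random() < 0.3:
        return "Sure! Here are some questions you could ask..."  # not JSON
    return json.dumps({"questions": ["What is your response-time SLA?"]})


def is_valid(raw: str) -> bool:
    # A response counts as valid only if it parses as JSON
    # and contains the key the tender structure expects.
    try:
        return "questions" in json.loads(raw)
    except json.JSONDecodeError:
        return False


def generate_with_tmr(prompt: str, replicas: int = 3) -> dict:
    # Triple Modular Redundancy: fire N identical requests in parallel
    # and return the first response that passes validation.
    with concurrent.futures.ThreadPoolExecutor(max_workers=replicas) as pool:
        futures = [pool.submit(call_model, prompt) for _ in range(replicas)]
        for future in concurrent.futures.as_completed(futures):
            raw = future.result()
            if is_valid(raw):
                return json.loads(raw)
    raise RuntimeError("All replicas returned invalid responses")
```

Because the three requests run concurrently, the added latency is close to that of a single call; only the token cost triples.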

A dedicated microservice

The process described above is handled by a serverless microservice that converts JSON input from the backend into a ChatGPT prompt using product-specific templates. The ChatGPT response is then converted back into JSON and returned to the backend. This setup fit easily into our existing Google Cloud Platform infrastructure.
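The two conversion steps of that microservice can be sketched as follows. The template text, field names (`subject`, `duration`), and the expected response keys are illustrative assumptions, not Valcori's actual schema.

```python
import json
from string import Template

# Hypothetical product-specific template; the real service would keep
# one template per tender type.
TENDER_TEMPLATE = Template(
    "You are a procurement assistant. Generate supplier questions for a "
    "tender about $subject lasting $duration. Respond with a JSON object "
    'of the form {"questions": [...], "parameters": [...]}.'
)


def build_prompt(payload: dict) -> str:
    # Step 1: convert the backend's JSON payload into a ChatGPT prompt.
    return TENDER_TEMPLATE.substitute(
        subject=payload["subject"], duration=payload["duration"]
    )


def parse_response(raw: str) -> dict:
    # Step 2: convert the model's raw text back into JSON for the backend,
    # raising if the structure does not match the expected tender schema.
    data = json.loads(raw)
    if not {"questions", "parameters"} <= data.keys():
        raise ValueError("Response missing required tender fields")
    return data
```

Keeping both conversions in one stateless service is what makes it a natural fit for a serverless deployment: each request is a pure JSON-in, JSON-out transformation.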

Broadening use cases

We see the use case described above as only the starting point. The possible use cases in procurement workflows are virtually endless, both for buyers and suppliers. In the end, it's in everyone's interest to find the optimal path towards beneficial procurement outcomes for all parties involved!

Power up your digital product with generative AI

Are you interested in discovering how generative AI can unlock new opportunities in your digital product? Reach out to our team (hello@panenco.com).

Let's build!
