QX Lab AI launched the world’s first node-based, hybrid Gen AI platform ‘Ask QX’ in 12 Indian and over 100 global languages on February 2, 2024, in Dubai. Businessline spoke to the co-founders of QX Lab AI just before the launch to get an insight into an Indian foray into generative artificial intelligence (Gen AI) with some unique characteristics. Participating in the discussion were Tilakraj Parmar (CEO), Arjun Prasad (Chief Strategy Officer), and Tathagat Prakash (Chief Scientist). Excerpts:


While there has been a great deal of noise around Gen AI in the last 12 months following OpenAI’s launch of ChatGPT, you have come out of the blue. Can you tell us about your journey and the product?

Tilakraj Parmar: We have been working on the product for nearly eight years now. We are one of the first to build a product on a hybrid, node-based architecture. Our product is about 30 per cent GPT-based and the rest is a unique blend.

We are launching with 100+ of the planned 300 global languages and 12 Indian languages. This involved significant effort in training and tuning the programme, just as much as in the architecture and model-building.

Arjun Prasad: We are three Indian founders, bootstrapped, working for eight years on the tech and launching only when we were ready for the customer. Our strategy is to go for the B2C market and empower everyone with the power of AI.

Tilakraj: We have trained around 372 billion parameters on roughly 6 trillion tokens — 1,000 tokens is roughly 750 words. Currently we are in the beta phase of the web version, which was launched in January.
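As a rough sanity check on the figures above, the commonly cited heuristic for English text is about 0.75 words per token (1,000 tokens is roughly 750 words); the exact ratio varies by language and tokenizer, so this is an estimate, not a property of the Ask QX model:

```python
# Rough token/word arithmetic using the common heuristic of
# ~0.75 English words per token (1,000 tokens ~ 750 words).
WORDS_PER_TOKEN = 0.75  # heuristic; varies by language and tokenizer

def tokens_to_words(tokens: float) -> float:
    """Estimate a word count from a token count."""
    return tokens * WORDS_PER_TOKEN

# 6 trillion training tokens works out to about 4.5 trillion words.
print(f"{tokens_to_words(6e12):.2e}")  # → 4.50e+12
```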

We have almost 8 million users now, which is a big number for us because we were not expecting this kind of a response. We have had somewhere around 60 million prompts — questions that people ask on the platform.

So the goal is to not just focus on the English language. This is where we differ from the rest of the crowd. We want to get into the regional language market where we want people to think and write and create in their own language.


Do you allow only text-based interaction or will we have voice input as well? Also, new developments from OpenAI allow for the creation of one’s own GPT with specific inputs like spreadsheets and other structured documents. Does QX allow that?

Tilakraj: On the web, we don’t have voice input yet, but on the app that we’ve just launched, it is present. With the voice input feature you can ask questions, but the answer will come in text for now. Voice response will be coming soon.

Arjun: The multimodal product is in alpha phase. It’s on our roadmap and we will release it at some point in the year. We are one of the very few companies that actually believe in delivering the tech first and then talking about it. We want it fine-tuned to make sure it’s a beautiful product.

We will soon have the text-to-image, text-to-voice, voice-to-text options. We are also working on ensuring that the usability — our UI, UX — is user-friendly.


Given that a big promise — and I am sure, a big challenge — is the Indian languages’ capabilities of the system, can you tell us how you went about building it?

Tilakraj: The team’s Indian origin had a huge impact on the platform’s language features, particularly in terms of cultural understanding and linguistic diversity. We decided not to work with translations but with original material in the given language. It was a challenge to find enough digitised material in Indian languages beyond the top six or seven. And it wasn’t just literature that we needed; we needed current, ongoing material from all areas, in each language.

Tathagat Prakash: To address this challenge, we looked towards synthetic data. Over a period of time this synthetic data will be replaced by actual data. But the synthetic data would have done its job — of not making the user feel like they are dealing with a translation.
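The gradual swap from synthetic to real data that Prakash describes can be pictured as a weighted sampling schedule. The sketch below is purely illustrative; the function names, batch size and linear schedule are assumptions for exposition, not QX Lab AI’s implementation:

```python
import random

def sample_training_batch(real_docs, synthetic_docs, real_fraction, size=8):
    """Draw a mixed batch, shifting weight from synthetic to real data.

    real_fraction rises from 0.0 toward 1.0 as more genuine digitised
    material in a given language becomes available, so synthetic text
    is progressively replaced by actual text.
    """
    batch = []
    for _ in range(size):
        pool = real_docs if random.random() < real_fraction else synthetic_docs
        batch.append(random.choice(pool))
    return batch

# Early in training, almost everything comes from synthetic text;
# later, real corpora dominate the batches.
early = sample_training_batch(["real"], ["synthetic"], real_fraction=0.1)
late = sample_training_batch(["real"], ["synthetic"], real_fraction=0.9)
```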

Challenges include diverse structures, scripts, phonetics, scarcity of quality datasets, capturing contextual nuances, integrating cultural context, resource constraints for less common languages, orthographic disambiguation, speech recognition challenges, semantic ambiguities and the need for efficient algorithms in various technical environments.


Some recent developments suggest that guardrails around AI are not very strong. Have you done anything about this concern in QX?

Tilakraj: We use contextual inferencing and we have also put “ethical dilemma” in it, where if somebody puts in a query about some famous movie stars or great personalities from India or anywhere else in the world, the program will stop and ask, “Is it necessary to give that answer?” These are a few things that we’ve worked very hard on. Before launching the product, we wanted to be very sure about contextual inferencing and the ethical dilemmas it brings in, so that the product doesn’t affect the end user adversely.
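The “ethical dilemma” check Parmar describes, pausing before answering queries about named individuals, could be approximated by a simple pre-inference filter. This is an illustration only: the blocked-topic list and the refusal behaviour are assumptions, not the actual Ask QX pipeline:

```python
# Illustrative pre-inference guardrail: before generating a response,
# check whether the query targets a sensitive named individual and,
# if so, pause rather than answer directly.
SENSITIVE_TOPICS = {"famous actor", "public figure"}  # hypothetical list

def guardrail(query: str) -> str:
    """Return a pause prompt for sensitive queries, else proceed."""
    q = query.lower()
    if any(topic in q for topic in SENSITIVE_TOPICS):
        # The system stops and poses the question instead of answering.
        return "Is it necessary to give that answer?"
    return "PROCEED"
```

In a real deployment this keyword match would be replaced by contextual inference over the whole query, but the control flow (filter first, generate second) is the same.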