We built our app leveraging a Generative AI Large Language Model - Gemini
Technical Design Details:
TLDR: We leveraged the technology available today, including Generative AI and Large Language Models (LLMs) like Gemini, to the fullest extent possible to "Eliminate Food Deserts on Planet Earth".
End to End workflow: Users reach our core website, www.groceryhoppers.com, from any device; it offers a wealth of information about food deserts and how our solution, Grocery Hoppers, helps eliminate them. Our Interactive App is the heart of the implementation: a Data Engine and a Map Engine operate on datasets we pulled from the US Department of Agriculture. We trained the Gemini LLM on our dataset to limit hallucinations and confined its scope to food deserts so that responses are always meaningful. We leveraged the Google Maps API and Speech APIs so the App responds to both voice and text commands and automatically plots counties, grocery stores, homes, and route maps. We'll be happy to share the source code upon request.
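As a rough illustration of that flow, the sketch below shows how a user command could travel from the App page to the Apps Script backend and back. The function names (handleCommand, routeCommand, showAnswer, speakAnswer, plotMarkers, dataEngine, mapEngine) are hypothetical placeholders for illustration, not our exact identifiers.

    // Client side (Interactive App page): send a typed or transcribed command to the backend.
    function handleCommand(commandText) {
      google.script.run
        .withSuccessHandler(function (result) {
          showAnswer(result.answerText);   // display Gemini's reply on screen
          speakAnswer(result.answerText);  // read it aloud via the Speech layer
          plotMarkers(result.markers);     // hand marker data to the Map layer
        })
        .routeCommand(commandText);        // hypothetical Apps Script entry point
    }

    // Server side (Apps Script): decide whether the Data Engine or the Map Engine handles it.
    function routeCommand(commandText) {
      var wantsMap = /map|route|plot|show/i.test(commandText);
      return wantsMap ? mapEngine(commandText) : dataEngine(commandText);
    }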
HTML/CSS Layer: Though the overall website is built with sites.google.com, the interactive App page is custom built with our own JavaScript, HTML, and CSS to achieve the look and feel we designed, including custom icons for map markers, custom route maps, bus stops, and more.
JavaScript & Apps Script: The JavaScript layer reacts to user actions such as enabling the microphone for voice commands, mute and unmute, typing text-based commands, and pressing the send button. The Apps Script layer hosts the core backend algorithms we wrote, namely the Data Engine and the Map Engine, along with the methods that call the Gemini, Speech, and Maps APIs.
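To make the front-end responsibilities concrete, here is a minimal sketch of how button handlers on the App page might wire user actions to the backend. The element IDs and helper names are assumptions for illustration, not our actual markup.

    // Hypothetical wiring of UI controls; element IDs are illustrative.
    var muted = false;

    document.getElementById('send-btn').addEventListener('click', function () {
      var text = document.getElementById('command-input').value;
      google.script.run.withSuccessHandler(renderResponse).routeCommand(text);
    });

    document.getElementById('mute-btn').addEventListener('click', function () {
      muted = !muted;                                // mute/unmute spoken responses
      if (muted) window.speechSynthesis.cancel();
    });

    function renderResponse(result) {
      document.getElementById('answer').textContent = result.answerText;
      if (!muted) speakAnswer(result.answerText);    // speakAnswer is sketched with the Speech layer
    }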
Data Engine: This component pulls data from the US Department of Agriculture, converts it into JSON, and feeds it to the Gemini LLM as input for tailored requests and responses about food deserts in North Carolina. The response is returned to the Interactive App, spoken back through the Speech API voice assistant, and displayed on the App screen for reading.
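A minimal sketch of the Data Engine idea in Apps Script, assuming the USDA data is available as a CSV export in Drive; the file name, column names, and prompt wording are illustrative assumptions rather than our exact implementation.

    // Apps Script sketch: turn a USDA CSV export into JSON records for North Carolina.
    function loadFoodDesertRecords() {
      var csv = DriveApp.getFilesByName('usda_food_access.csv').next()  // hypothetical file name
                        .getBlob().getDataAsString();
      var rows = Utilities.parseCsv(csv);
      var header = rows[0];
      return rows.slice(1)
        .map(function (r) {
          var rec = {};
          header.forEach(function (col, i) { rec[col] = r[i]; });
          return rec;
        })
        .filter(function (rec) { return rec.State === 'North Carolina'; }); // assumed column name
    }

    // Build a grounded prompt so Gemini answers only from our data.
    function buildPrompt(question, records) {
      return 'Answer only about food deserts in North Carolina using this data:\n' +
             JSON.stringify(records) + '\nQuestion: ' + question;
    }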
Map Engine: This component pulls data from the US Department of Agriculture and converts it into JSON along with map-marker information (the latitude and longitude of each record), which is fed to the Gemini LLM as input for tailored requests and responses about food deserts in North Carolina. The response is then fed to the Google Maps API to plot the markers on the UI.
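In the same spirit, a sketch of the Map Engine output: each record is reduced to the marker fields the front-end needs before plotting. The column names and the mapEngine/loadFoodDesertRecords helpers are assumptions carried over from the sketches above.

    // Apps Script sketch: convert records into marker JSON the front-end can plot.
    function mapEngine(commandText) {
      var records = loadFoodDesertRecords();         // from the Data Engine sketch above
      var markers = records.map(function (rec) {
        return {
          label: rec.County,                          // assumed column names
          lat: parseFloat(rec.Latitude),
          lng: parseFloat(rec.Longitude)
        };
      });
      return { answerText: markers.length + ' locations found.', markers: markers };
    }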
Gemini API: We completed codelabs on the Gemini API to get comfortable with how it works end to end, then incorporated that functionality into our app.
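The call itself can be made from Apps Script with UrlFetchApp against the Gemini generateContent endpoint. The sketch below assumes an API key stored in Script Properties and a particular model name; both are illustrative, not necessarily what our app uses.

    // Apps Script sketch: send a grounded prompt to Gemini and return the reply text.
    function askGemini(prompt) {
      var key = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
      var url = 'https://generativelanguage.googleapis.com/v1beta/models/' +
                'gemini-1.5-flash:generateContent?key=' + key;   // model name is an assumption
      var payload = { contents: [{ parts: [{ text: prompt }] }] };
      var response = UrlFetchApp.fetch(url, {
        method: 'post',
        contentType: 'application/json',
        payload: JSON.stringify(payload)
      });
      var data = JSON.parse(response.getContentText());
      return data.candidates[0].content.parts[0].text;
    }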
Maps API: We learned the Google Maps JavaScript API and built the code we needed for our use cases, such as adding custom markers, custom routes, and marker icons, and auto-zooming based on the plotted data.
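A compressed sketch of the Maps JavaScript API pieces described above; the icon URL, element ID, and plotMarkers name are placeholders for illustration.

    // Maps JavaScript API sketch: custom marker icons plus auto-zoom via fitBounds.
    function plotMarkers(markers) {
      var map = new google.maps.Map(document.getElementById('map'), {
        zoom: 7,
        center: { lat: markers[0].lat, lng: markers[0].lng }
      });
      var bounds = new google.maps.LatLngBounds();
      markers.forEach(function (m) {
        var position = { lat: m.lat, lng: m.lng };
        new google.maps.Marker({
          map: map,
          position: position,
          title: m.label,
          icon: { url: 'icons/grocery.png',                    // placeholder custom icon
                  scaledSize: new google.maps.Size(32, 32) }
        });
        bounds.extend(position);
      });
      map.fitBounds(bounds);                                    // auto-zoom to the plotted data
    }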
Speech API: We learned the Google Text-to-Speech and Speech-to-Text APIs and wrote custom methods that are invoked by user actions in our app UI, such as clicking the record button to dispatch voice commands, or mute and unmute to hear the responses from the Gemini API.
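As one way to wire the voice controls in the browser, the sketch below uses the Web Speech API built into Chromium-based browsers as a stand-in for the speech layer; our actual methods call Google's Speech APIs, so treat the recognition and synthesis objects here, as well as the element IDs, as illustrative assumptions.

    // Browser sketch using the Web Speech API as a stand-in for the speech layer.
    var recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
    recognition.lang = 'en-US';

    recognition.onresult = function (event) {
      var spoken = event.results[0][0].transcript;    // speech-to-text result
      handleCommand(spoken);                          // hand the voice command to the app flow
    };

    document.getElementById('record-btn').addEventListener('click', function () {
      recognition.start();                            // record button dispatches a voice command
    });

    function speakAnswer(text) {                      // text-to-speech for Gemini's reply
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    }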
Devices Supported: The application is designed to work seamlessly on all mobile devices, PCs, and tablets, including but not limited to Windows, iOS, and Android. Just open a browser on any device and navigate to www.groceryhoppers.com.