Manugen AI - Agent Development Kit Hackathon with Google Cloud


Researchers today navigate an increasingly complex landscape: from securing funding and obtaining ethical approvals to designing robust methodologies and managing ever-growing datasets, the path from hypothesis to discovery is fraught with logistical, technical, and interpersonal hurdles. Yet even once experiments conclude and results are compelling, a new challenge emerges—translating months, or sometimes years, of work into a concise, coherent, and publishable manuscript. Crafting such a paper demands mastery not only of scientific rigor but also of narrative flow, clarity of argument, and precise language, all under the pressure of journal deadlines and peer review.

These dual demands often pull scientists in opposite directions: immersed in the nitty-gritty of data collection and analysis, they must then shift gears and adopt the wide-angle lens of storytelling, framing the big question, situating findings within a broader scholarly conversation, and articulating implications for future work. Moreover, time spent polishing figures, formatting references, and resolving reviewer comments is time diverted from deeper scientific inquiry; yet without this effort, the impact of the research remains unrealized. Writing is therefore not a mere afterthought but an integral and often underestimated component of the scientific endeavor.

A Real-World Test of Google ADK


We saw the Devpost hackathon “Agent Development Kit Hackathon with Google Cloud” as an ideal proving ground for Google’s Agent Development Kit because it offered a fast-paced, collaborative setting in which to experiment with orchestrating specialized AI agents end-to-end. In a short amount of time, we could spin up retrieval, summarization, drafting, and revision agents, wire them together through Google ADK’s built-in state management, and immediately observe how small prompt tweaks or workflow adjustments affected overall output quality. The hackathon’s tight timeframe forced us to confront real-world integration and error-handling challenges—everything from passing context cleanly between agents to gracefully recovering from unexpected generation failures—while also giving us confidence that an agent-based architecture can dramatically streamline the scientific writing process.

What Inspired Us


From the outset, we were captivated by the promise of autonomous AI agents collaborating to tackle complex, multidisciplinary tasks. The Agent Development Kit Hackathon with Google Cloud challenged participants to “build autonomous multi-agent AI systems” capable of content creation, among other applications.


Recognizing that writing a rigorous scientific manuscript involves navigating vast literature, synthesizing nuanced insights, and maintaining a coherent narrative, we saw an opportunity to apply multi-agent orchestration to streamline and elevate the research-writing process. The hackathon’s emphasis on orchestrated agent interactions inspired us to ask: what if specialized agents—each expert in retrieval, summarization, drafting, and revision—could work in concert to produce a high-quality scientific paper?

What It Does

We built Manugen-AI, an agentic system that takes human-generated experimental results, such as figures and rough descriptions of findings, and turns them into a full, well-structured scientific manuscript. Its UI lets the user upload figures, enter Markdown with a bullet list of rough instructions for the system, and draft sections incrementally. When a figure is uploaded, a dedicated agent interprets it, generating a title and description that the Results-section agent later uses to reference the figure and explain the results. When the user enters a URL to a source code file, such as a .py Python file, the Methods-section agent uses a tool to download the file, work out how the method works, and explain it while drafting the Methods section. With Manugen-AI, a user can go from results, figures, and rough instructions to a well-structured manuscript draft in minutes, significantly speeding up science communication.
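As an illustration of the kind of tool the Methods-section agent can call, here is a minimal sketch of a function that downloads a source file from a URL and returns its text. The function name, return shape, and error handling are assumptions made for this example rather than the exact code in our repository.

```python
import urllib.request


def fetch_source_file(url: str) -> dict:
    """Download a source code file (e.g., a .py file) so an agent can read it."""
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            text = response.read().decode("utf-8", errors="replace")
        return {"status": "success", "url": url, "content": text}
    except Exception as exc:  # report failures to the agent instead of crashing
        return {"status": "error", "url": url, "message": str(exc)}
```

A plain function like this can be handed to an ADK agent as a tool, and the framework exposes it to the model so the agent can decide when to call it.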

Check out our short demo video and GitHub repo!

How We Built It

  1. User Interface. We created a web interface that helps users write their paper content with the help of AI-assistant actions. Under the hood, we used FastAPI to provide an API service layer to a Vue.js front-end, so users can quickly get feedback on their work directly in the browser. The interface offers an intuitive writing environment in which agentic input is requested through specific actions, described in detail below. Once agents have acted on the content, users can review the results and make iterative improvements.
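As a rough sketch of what that service layer can look like, the endpoint below accepts the current draft and the requested action; the route, payload shape, and names are illustrative assumptions rather than our exact API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ActionRequest(BaseModel):
    """Payload sent by the front-end: the current draft plus the requested action."""
    content: str  # Markdown the user is editing
    action: str   # e.g., "draft", "repos", "cites"


@app.post("/api/actions")
async def run_action(request: ActionRequest) -> dict:
    # In the real app this is where the agent pipeline would be invoked;
    # here we simply echo the request to keep the sketch self-contained.
    return {"action": request.action, "content": request.content}
```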

  2. Agent Roles & Pipeline:

Manugen-AI enables you to produce high-quality agentic output for use with scientific manuscript generation.

We used the Python version of ADK to define each agent’s behavior and orchestrate the workflow, tapping into its built-in support for asynchronous execution and state management.
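To make that concrete, here is a minimal sketch of how two section agents might be defined with the Python ADK. The names, instructions, and model string are illustrative rather than copied from our codebase.

```python
from google.adk.agents import LlmAgent

# Interprets an uploaded figure and produces a title plus a short description.
figure_agent = LlmAgent(
    name="figure_interpreter",
    model="gemini-2.0-flash",
    description="Generates a title and description for an uploaded figure.",
    instruction=(
        "Given a figure, write a concise title and a short description of what "
        "it shows, suitable for referencing in a Results section."
    ),
    output_key="figure_summary",  # saved into session state for later agents
)

# Drafts the Results section, drawing on the figure summaries kept in state.
results_agent = LlmAgent(
    name="results_section_agent",
    model="gemini-2.0-flash",
    description="Drafts the Results section of the manuscript.",
    instruction=(
        "Draft a Results section that references the available figures and "
        "explains the findings described by the user."
    ),
)
```

Because ADK threads session state between agents, the `output_key` above is one way a later agent, such as the Results-section agent, can pick up what an earlier agent produced.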

Core Agents


These agents provide the core functionality of the system and are invoked directly by the coordinator agent.
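As a rough sketch of how such a coordinator can be wired up in ADK, the snippet below builds a coordinator that delegates to a few section agents; the agent names and instructions are illustrative, not the exact ones in our repository.

```python
from google.adk.agents import LlmAgent


def section_agent(name: str, task: str) -> LlmAgent:
    """Build a minimal section-drafting agent (illustrative helper)."""
    return LlmAgent(
        name=name,
        model="gemini-2.0-flash",
        description=task,
        instruction=task,
    )


coordinator = LlmAgent(
    name="manuscript_coordinator",
    model="gemini-2.0-flash",
    description="Routes user requests to the appropriate section agent.",
    instruction=(
        "Decide which part of the manuscript the user is working on and "
        "delegate the request to the matching sub-agent."
    ),
    sub_agents=[
        section_agent("results_section_agent", "Draft the Results section from figure summaries and notes."),
        section_agent("methods_section_agent", "Draft the Methods section from source code explanations."),
    ],
)
```

With `sub_agents` declared this way, ADK lets the coordinator transfer a request to whichever sub-agent it judges best suited for the user's instruction.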

Extended Agents


The following agents add specific capabilities that we thought would be helpful for the project, such as the Repos and Cites actions demonstrated below.

Repos action demonstration

The ‘Repos’ action lets users provide a repository link and draft an entire paper from it.
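We don't reproduce the actual Repos implementation here, but a tool along these lines could let an agent walk a public repository before drafting; the function name and the choice of the GitHub contents API are assumptions made for this sketch.

```python
import json
import urllib.request


def list_repo_files(owner: str, repo: str, path: str = "") -> dict:
    """List files at a path in a public GitHub repository via the contents API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            entries = json.loads(response.read().decode("utf-8"))
        if isinstance(entries, dict):  # a single file was requested, not a directory
            entries = [entries]
        return {
            "status": "success",
            "files": [
                {"path": e["path"], "type": e["type"], "url": e.get("download_url")}
                for e in entries
            ],
        }
    except Exception as exc:
        return {"status": "error", "message": str(exc)}
```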

Cites action demonstration

The ‘Cites’ action lets users incorporate relevant citations that strengthen the paper and relate its content to other works.
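The details of how the Cites action finds references live in the repository; as one possible illustration, a citation-lookup tool could query the public Crossref API for candidate works. The function name and result shape here are assumptions for this sketch.

```python
import json
import urllib.parse
import urllib.request


def search_citations(query: str, rows: int = 5) -> dict:
    """Search the Crossref works API for papers matching a free-text query."""
    url = "https://api.crossref.org/works?" + urllib.parse.urlencode(
        {"query.bibliographic": query, "rows": rows}
    )
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            items = json.loads(response.read().decode("utf-8"))["message"]["items"]
        return {
            "status": "success",
            "results": [
                {
                    "doi": item.get("DOI"),
                    "title": (item.get("title") or [""])[0],
                    "year": item.get("issued", {}).get("date-parts", [[None]])[0][0],
                }
                for item in items
            ],
        }
    except Exception as exc:
        return {"status": "error", "message": str(exc)}
```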

What We Learned


Challenges We Faced

Conclusion and Future Directions


Participating in the Devpost hackathon was an exhilarating journey that validated the power of multi-agent AI in scientific writing. In a short amount of time, we witnessed firsthand how coordinated agents could accelerate literature review, draft coherent sections, explain figures, manage citations, and review drafts while applying relevant updates. The collaborative hackathon environment pushed us to solve real-world integration challenges under tight deadlines, and the results exceeded our expectations—our prototype delivered draft manuscripts far more quickly than traditional workflows.

Looking ahead, we’re excited to iterate on this foundation: refining prompt strategies to further reduce factual errors, experimenting with additional sub-agents (e.g., for data analysis or ethical bias checks), and exploring integrations with more diverse data sources to support scientific writing. By continuously benchmarking against human-authored papers and expanding our agent toolkit, we aim to uncover the full potential of Google ADK for research writing. This hackathon was just the beginning, and we’re eager to push the boundaries of what autonomous AI collaborations can achieve in scholarly publishing!
