
NILA TV

Welcome to my portfolio channel! Use the remote or click the menu button to explore my projects.

RAG‑Enhanced Moderation Dashboard

Build a content moderation platform that uses RAG (Retrieval‑Augmented Generation) to justify why a piece of media is flagged as inappropriate. Instead of a "black box" flagging system, reviewers receive grounded, explainable AI insights with traceable evidence.

Tech Stack

  • LLM & RAG: LangChain + OpenAI
  • Vector Store: FAISS (dev) / Weaviate (prod)
  • Embeddings: CLIP for visual content, OpenAI for text/policy content
  • Backend: FastAPI
  • Frontend: React.js (reviewer dashboard with frame scrubber)
  • Database/Storage: Supabase (Postgres + Object Storage for frames and metadata)

Key Learnings

  • How to use vector similarity + context retrieval to ground LLM responses.
  • How to evaluate factuality & traceability of moderation explanations.
  • How to tune similarity thresholds to balance precision & recall in moderation tasks.
  • How to measure model performance against labeled datasets (PR curves, F1 scoring); see the threshold-tuning sketch after this list.
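As a rough illustration of the threshold-tuning and scoring workflow above, here is a minimal Python sketch; the similarity scores and labels are made up, and only scikit-learn's standard metrics are assumed:

    # Minimal sketch: tuning a similarity threshold against a labeled set.
    # The scores/labels are illustrative only; scikit-learn provides the metrics.
    import numpy as np
    from sklearn.metrics import precision_recall_curve, f1_score

    # Cosine similarities between new frames and known-violating frames (hypothetical data).
    scores = np.array([0.91, 0.33, 0.78, 0.55, 0.87, 0.12, 0.69, 0.44])
    labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = frame truly violates policy

    precision, recall, thresholds = precision_recall_curve(labels, scores)

    # Pick the threshold that maximizes F1 on the labeled set.
    f1s = 2 * precision * recall / np.clip(precision + recall, 1e-9, None)
    best = f1s[:-1].argmax()  # the final precision/recall point has no threshold
    print(f"best threshold={thresholds[best]:.2f}, "
          f"precision={precision[best]:.2f}, recall={recall[best]:.2f}")

    # Evaluate a fixed production threshold the same way the dashboard would.
    preds = (scores >= 0.6).astype(int)
    print("F1 at 0.6:", f1_score(labels, preds))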

Example Use Case

A sports clip is flagged for "violence." The system extracts keyframes, computes CLIP embeddings, and retrieves visually similar previously flagged frames (e.g., UFC-style footage) along with the relevant policy text for the violence category. The LLM then generates a grounded explanation: "This frame shows physical altercations similar to previous UFC clips flagged for violence." The output includes evidence links for human verification.
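A minimal sketch of that flow in Python, assuming a FAISS index of previously flagged frames, the sentence-transformers CLIP checkpoint, and the OpenAI chat API; the metadata structure and prompt are hypothetical:

    # Minimal sketch of the flagging flow (index, metadata, and prompt are illustrative).
    import faiss
    import numpy as np
    from PIL import Image
    from sentence_transformers import SentenceTransformer
    from openai import OpenAI

    clip = SentenceTransformer("clip-ViT-B-32")
    client = OpenAI()

    def explain_flag(keyframe_path, frame_index: faiss.Index, frame_meta, policy_text):
        # 1) Embed the keyframe with CLIP and normalize for cosine similarity.
        vec = clip.encode([Image.open(keyframe_path)])
        vec = (vec / np.linalg.norm(vec, axis=1, keepdims=True)).astype("float32")

        # 2) Retrieve the most similar previously flagged frames as evidence.
        scores, ids = frame_index.search(vec, 3)
        evidence = [frame_meta[i] for i in ids[0]]

        # 3) Ask the LLM for a grounded explanation that cites the retrieved evidence.
        prompt = (
            f"Policy excerpt:\n{policy_text}\n\n"
            f"Similar previously flagged frames: {evidence}\n\n"
            "Explain in one sentence why the new frame may violate this policy, "
            "citing the evidence above."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content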

LLM Evaluation Agent for SEO Answer Auditing

Develop an automated LLM evaluation agent that continuously queries AI search engines (ChatGPT, Claude, Perplexity) and evaluates brand visibility, truthfulness, and answer drift over time. This allows businesses to track their "share‑of‑voice" in AI‑generated answers and detect hallucinations or content shifts.

Tech Stack

  • Agents & Automation: LangChain agents for scheduled querying
  • Evaluation: LLM‑as‑a‑Judge (GPT‑4) for semantic comparison & truthfulness scoring
  • Database: PostgreSQL for snapshot storage
  • Vector Search: Pinecone / Qdrant for historical answer similarity
  • Dashboarding: Streamlit or Dash for time‑series analytics

Key Learnings

  • How to build automated evaluation pipelines for LLM answers at scale.
  • Techniques for semantic diffing of AI‑generated answers across time.
  • Prompt engineering for LLM‑as‑a‑Critic workflows (hallucination detection, entity presence); a judging sketch follows this list.
  • Strategies for tracking share‑of‑voice for brands in AI search engines.
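A rough sketch of the diffing and judging pieces, assuming OpenAI embeddings for the semantic comparison and a GPT-4-class judge model; the rubric and function names are illustrative:

    # Illustrative sketch: semantic drift between snapshots + LLM-as-a-Judge scoring.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def semantic_drift(old_answer: str, new_answer: str) -> float:
        """Cosine distance between last week's and this week's answer embeddings."""
        resp = client.embeddings.create(model="text-embedding-3-small",
                                        input=[old_answer, new_answer])
        a, b = (np.array(d.embedding) for d in resp.data)
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def judge_answer(answer: str, brand: str) -> str:
        """Ask a judge model whether the brand appears and which claims look unsupported."""
        rubric = (
            f"Answer:\n{answer}\n\n"
            f"1) Is '{brand}' mentioned? 2) List any claims that look unsupported.\n"
            "Reply as JSON with keys 'brand_mentioned' (bool) and 'suspect_claims' (list)."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": rubric}],
            response_format={"type": "json_object"},
        )
        return resp.choices[0].message.content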

Example Use Case

Query: "What's the best CRM for small businesses?" The agent queries Perplexity, ChatGPT, and Claude, and stores each answer as a snapshot. The weekly evaluation shows HubSpot mentioned in 80% of responses (up 15% week over week), Salesforce visibility dropping to 30%, and the change correlating with Perplexity shifting to newer web sources.
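The weekly share-of-voice number itself is a simple mention-rate aggregation over the stored snapshots; a minimal sketch with hypothetical records (in the project these would come from PostgreSQL):

    # Minimal sketch: week-over-week share-of-voice from stored answer snapshots.
    from collections import defaultdict

    snapshots = [  # hypothetical rows; the real table lives in PostgreSQL
        {"week": "2024-W21", "engine": "perplexity", "answer": "HubSpot and Zoho lead..."},
        {"week": "2024-W21", "engine": "chatgpt", "answer": "Salesforce Essentials..."},
        {"week": "2024-W22", "engine": "perplexity", "answer": "HubSpot is the top pick..."},
        {"week": "2024-W22", "engine": "claude", "answer": "HubSpot, then Pipedrive..."},
    ]

    def share_of_voice(brand: str) -> dict:
        """Fraction of answers per week that mention the brand."""
        totals, hits = defaultdict(int), defaultdict(int)
        for snap in snapshots:
            totals[snap["week"]] += 1
            hits[snap["week"]] += brand.lower() in snap["answer"].lower()
        return {week: hits[week] / totals[week] for week in totals}

    print(share_of_voice("HubSpot"))  # e.g. {'2024-W21': 0.5, '2024-W22': 1.0}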

FitnessPal + AI Agent for Personalized Meal Planning

Build a FitnessPal‑style macro tracker with an AI agent that generates daily meal plans based on a user's macro goals, food logs, and dietary constraints. This project integrates traditional backend development with LLM agent tool‑calling, creating an intelligent fitness assistant.

Tech Stack

  • Backend: FastAPI (Auth, Food Logs, Macro Goals, Agent API)
  • Database: Supabase (Postgres)
  • Frontend: React/Next.js (meal log & AI plan viewer)
  • Agent: OpenAI LLM with tool‑calling for food search and plan generation
  • Auth: JWT tokens; optional OAuth for agent authorization

Key Learnings

  • Designing secure agent tool‑calling APIs with scope‑limited tokens.
  • Structuring nutrition databases to allow macro computation and query efficiency.
  • Building LLM‑driven meal plans that respect real macro goals (±5% tolerance); see the tolerance check after this list.
  • Laying groundwork for future AI coaching features (critique, swaps, recipe embeddings).
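The ±5% tolerance check is just a relative-error comparison per macro; a minimal sketch with hypothetical field names (the real values would come from the Supabase goals and logs tables):

    # Minimal sketch: does a proposed plan land within ±5% of every macro goal?
    def within_tolerance(goals: dict, plan_totals: dict, tol: float = 0.05) -> bool:
        """True if every macro in the plan is within ±tol of its goal."""
        return all(
            abs(plan_totals[macro] - target) <= tol * target
            for macro, target in goals.items()
        )

    goals = {"protein_g": 120, "carbs_g": 60, "fat_g": 40}   # hypothetical user goal
    plan = {"protein_g": 118, "carbs_g": 62, "fat_g": 41}    # agent's proposed day total
    print(within_tolerance(goals, plan))  # True: every macro is within 5% of its goal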

Example Use Case

A user logs breakfast and sets a macro goal of 120 g protein / 60 g carbs / 40 g fat. The agent calls get_goals(), get_logs(date), and search_foods(), then returns: "For dinner, grilled chicken with quinoa and broccoli meets your macro target within ±5%."
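A condensed sketch of that tool-calling loop using the OpenAI tools API; the tool schemas mirror the get_goals/get_logs/search_foods calls above, and the backend handlers are stubbed (in the project they would hit the FastAPI service):

    # Condensed sketch of the agent loop (tool schemas illustrative, backend stubbed).
    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [
        {"type": "function", "function": {"name": "get_goals",
            "parameters": {"type": "object", "properties": {}}}},
        {"type": "function", "function": {"name": "get_logs",
            "parameters": {"type": "object",
                           "properties": {"date": {"type": "string"}}, "required": ["date"]}}},
        {"type": "function", "function": {"name": "search_foods",
            "parameters": {"type": "object",
                           "properties": {"query": {"type": "string"}}, "required": ["query"]}}},
    ]

    def run_tool(name: str, args: dict) -> dict:
        # Stubbed results; the real handlers would call the FastAPI backend.
        stubs = {"get_goals": {"protein_g": 120, "carbs_g": 60, "fat_g": 40},
                 "get_logs": {"logged": [{"meal": "breakfast", "protein_g": 30}]},
                 "search_foods": {"results": [{"name": "grilled chicken", "protein_g": 35}]}}
        return stubs[name]

    messages = [{"role": "user", "content": "Plan my dinner for today."}]
    while True:
        resp = client.chat.completions.create(model="gpt-4o-mini",
                                              messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:          # no more tool requests: final plan text
            print(msg.content)
            break
        messages.append(msg)
        for call in msg.tool_calls:     # execute each requested tool, feed the result back
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})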

About Nila Karthikesan

Full-stack developer specializing in LLM evaluation, vector search/RAG, and AI agent integration. The three projects above are built end to end and demonstrate expertise in modern web development and applied AI.

Skills

  • Full-Stack Development: React, Next.js, FastAPI, Python
  • AI & LLMs: OpenAI, LangChain, Vector Search, RAG
  • Databases: PostgreSQL, Supabase, Vector Stores
  • DevOps: Docker, CI/CD, Cloud Deployment
  • Evaluation: LLM-as-a-Judge, Performance Metrics

Contact & Education

Nila Karthikesan
240-408-2114 | nilakarthikesan@gmail.com
linkedin.com/in/nila-karthikesan | github.com/nilakarthikesan

University of Maryland, College Park
B.S. in Computer Science, Minor in Engineering Technology
Aug. 2020 – May 2024

Society of Women Engineers
Director of Engineering 2021 – 2024
• Created and distributed over 50 weekly hands-on projects implementing lecture concepts in JS/React/Node/CSS, event handling, testing, REST APIs, backend services, and cloud-native development.

Professional Experience

Software Engineer | GEICO | New York, NY | May 2024 - Present

  • Led development of a full-stack application managing GEICO's server database repository, providing 500+ engineers and IT staff with access to 100,000+ server records
  • Implemented the frontend in React and the backend as a GraphQL API, reducing server query time by 40%
  • Migrated Power BI scripts to PostgreSQL and Python, achieving a 50% reduction in data processing time

Software Engineering Fellow | Palantir Technologies | New York, NY | Dec. 2023 – Jan. 2024

  • Developed a backend for a proxy service handling 10,000+ IPs & 100+ monthly users on an AI platform
  • Improved project load time by 80% (from minutes to seconds) using Docker & Kubernetes containers

Software Engineering Intern | Comcast | Washington, DC | May 2023 – Aug 2023

  • Developed a full-stack content moderation platform using React.js, Flask, and OpenAI's CLIP model, leading to a 15% increase in efficiency in identifying inappropriate content in Comcast's media
  • Containerized the microservices architecture with Docker and Kubernetes, improving scalability by 40%
  • Enhanced the data pipeline by integrating Elasticsearch, resulting in an 8x reduction in data retrieval times and a 20% increase in user satisfaction through faster and more accurate content flagging
  • Integrated AWS services such as EC2, S3, and RDS to handle large-scale media data storage and processing

Software Engineering Intern | Delaware INBRE | Wilmington, Delaware | May 2022 – Aug 2022

  • Led the development of a full-stack application for nanoparticle analysis, used by 100+ researchers, improving data processing speed by 60% and reducing manual effort by 30 hours per week
  • Automated workflows using Pandas, NumPy, and Airflow, reducing errors by 50% and increasing productivity by 25%
  • Deployed the application on AWS, ensuring 99.9% uptime and reducing operational costs by 15%
  • Implemented REST APIs for retrieving nanoparticle optical data from experimental datasets and computational simulations, reducing response times by 35% and enhancing tool integration

Technical Skills

Languages: Java, Python, C/C++, SQL (Postgres), JavaScript, HTML/CSS, R

Frameworks: React, Node.js, Flask, JUnit, WordPress, Material-UI, FastAPI, GraphQL

Developer Tools: Git, Docker, TravisCI, Google Cloud Platform, VS Code, Visual Studio, PyCharm, IntelliJ, Eclipse

Libraries: pandas, NumPy, Matplotlib

Philosophy

Don't you think the web should be fun again? Buttons and knobs and sounds and experimentation? We have an amazing medium here, to be able to express and do whatever we want. Let's do it!

Channel Guide

CH 0 | HOME | Welcome to NILA TV
CH 1 | RAG MODERATION | Content moderation with explainable AI
CH 2 | LLM SEO AUDIT | Automated evaluation of AI search engines
CH 3 | FITNESSPAL AI | AI-powered meal planning assistant
CH 4 | ABOUT NILA | Skills, experience, and philosophy