OptivalueTek · Project-0
AI assistant for a large retail bank
Branch and back-office teams needed one place to ask about credit policies, KYC steps, exceptions, and case updates in plain language. As technical lead, I designed how the parts worked together so that answers came from approved bank documents, respected who was allowed to see what, and could be audited later.
GenAI assistant services
The natural-language replies were implemented using LLM (large language model) calls. We wrapped those calls with fixed system prompts written with the bank so the model always behaved like an internal helper: short answers, no legal advice, and clear wording when something was out of scope.
Structured answers were implemented using JSON output from the model. That way the same response could fill a checklist or a case note draft in the bank’s workflow tool, because the app could read fields instead of guessing from free text.
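To make that contract concrete, here is a minimal sketch in Python with pydantic; the field names are hypothetical stand-ins, not the bank's actual schema.

```python
from pydantic import BaseModel

class AssistantReply(BaseModel):
    # hypothetical fields; the real schema was agreed with the bank's workflow team
    answer: str
    policy_refs: list[str]
    out_of_scope: bool

# the model is prompted to return JSON matching this shape; the app then
# validates the fields instead of scraping free text
raw = '{"answer": "Two proofs of address are required.", "policy_refs": ["KYC-4.2"], "out_of_scope": false}'
reply = AssistantReply.model_validate_json(raw)
print(reply.policy_refs)  # ['KYC-4.2']
```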
RAG knowledge retrieval layer
Grounded answers were implemented using RAG (retrieval-augmented generation). First we turned policy PDFs, circulars, and SOPs into searchable chunks stored in a vector database. When a user asked a question, we fetched the closest matching chunks and only then asked the LLM to answer using that text.
That reduced “made up” policy text because the model had to lean on what was retrieved. Access control was applied at retrieval time so a user only saw chunks from libraries their role could read. When a regulator circular or internal policy pack changed, we ran a re-ingest job so new wording replaced old chunks instead of mixing silently.
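The retrieval-time access check was the piece that mattered most, so here is a small self-contained sketch of the idea; the library names and scores are invented, and the real system ranked inside a vector database rather than a Python list.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    library: str   # e.g. "credit-policy", "kyc-sop" (illustrative names)
    text: str
    score: float   # similarity to the query, precomputed here for brevity

def retrieve(chunks: list[Chunk], allowed_libraries: set[str], k: int = 3) -> list[Chunk]:
    # filter by role BEFORE ranking, so restricted text never reaches the prompt
    visible = [c for c in chunks if c.library in allowed_libraries]
    return sorted(visible, key=lambda c: c.score, reverse=True)[:k]

index = [Chunk("credit-policy", "Exceptions above the limit need approval...", 0.91),
         Chunk("internal-audit", "Audit-only guidance...", 0.95)]
# the audit chunk never surfaces for a branch role, even though it scores higher
print(retrieve(index, allowed_libraries={"credit-policy"}))
```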
LangChain and LangGraph orchestration
The step-by-step flow (rewrite the question → search the bank library → shrink long chunks → call the model) was implemented using LangChain so each step was a reusable building block and easier to test or swap later.
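A trimmed-down version of that chain, using LangChain's RunnableLambda composition with stub steps standing in for the real vector-store and model calls:

```python
from langchain_core.runnables import RunnableLambda

# each step is a Runnable, so it can be unit-tested or swapped in isolation
rewrite  = RunnableLambda(lambda q: q.strip().rstrip("?"))                # stub query rewrite
retrieve = RunnableLambda(lambda q: {"question": q,
                                     "chunks": ["KYC-4.2: two proofs of address"]})  # stub search
compress = RunnableLambda(lambda s: {**s, "chunks": [c[:80] for c in s["chunks"]]})  # stub shrink
answer   = RunnableLambda(lambda s: f"Per {s['chunks'][0].split(':')[0]}, ...")      # stub model call

chain = rewrite | retrieve | compress | answer
print(chain.invoke("What are the KYC steps for address proof?"))
```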
The branching paths (for example: “if retrieval is weak, ask for a clarifying question” or “if the ticket is high risk, pause for a human”) were implemented using LangGraph so the flow looked like a small state machine instead of nested if-statements. That made timeouts, retries, and observability simpler because each state could log what happened.
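A cut-down sketch of that graph using LangGraph's StateGraph; the node names, routing rule, and stub bodies are illustrative, not the production flow.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    chunks: list[str]
    answer: str

def retrieve(state: State) -> dict:
    return {"chunks": []}  # stub: the real node queried the vector store

def clarify(state: State) -> dict:
    return {"answer": "Could you name the policy area you mean?"}

def respond(state: State) -> dict:
    return {"answer": f"Based on {len(state['chunks'])} passages: ..."}

def route(state: State) -> str:
    # weak retrieval -> ask a clarifying question instead of guessing
    return "weak" if len(state["chunks"]) == 0 else "ok"

g = StateGraph(State)
g.add_node("retrieve", retrieve)
g.add_node("clarify", clarify)
g.add_node("respond", respond)
g.set_entry_point("retrieve")
g.add_conditional_edges("retrieve", route, {"weak": "clarify", "ok": "respond"})
g.add_edge("clarify", END)
g.add_edge("respond", END)

app = g.compile()
print(app.invoke({"question": "limits on dormant accounts?", "chunks": [], "answer": ""}))
```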
Agentic AI
Tool use (look up a ticket id, pull the latest policy version, draft a note) was implemented as a small agent loop with a hard cap on steps. The agent could not browse freely; it only called allow-listed APIs the bank already trusted.
Any step that could change a customer record or send a message outward stopped for human approval first. That way we got speed for read-only lookups while keeping write actions under human control.
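The loop itself was small; this sketch keeps both guardrails visible, with hypothetical tool names and a stubbed approval hook.

```python
MAX_STEPS = 5  # hard cap: the agent cannot loop forever

READ_TOOLS  = {"lookup_ticket": lambda tid: {"status": "open"},       # stub allow-listed APIs
               "latest_policy": lambda name: {"version": "2024-03"}}
WRITE_TOOLS = {"send_note": lambda text: None}                        # anything outward-facing

def human_approved(tool: str, args: dict) -> bool:
    return False  # stub: the real hook routed the action to a reviewer queue

def run_agent(next_action) -> str:
    for _ in range(MAX_STEPS):
        tool, args = next_action()
        if tool is None:
            return "done"
        if tool in WRITE_TOOLS:
            if not human_approved(tool, args):
                return "paused for approval"       # writes never run unattended
            WRITE_TOOLS[tool](**args)
        elif tool in READ_TOOLS:
            READ_TOOLS[tool](**args)               # read-only calls run freely
        else:
            raise ValueError(f"tool not allow-listed: {tool}")
    return "step cap reached"

steps = iter([("lookup_ticket", {"tid": "T-100"}), ("send_note", {"text": "draft"})])
print(run_agent(lambda: next(steps, (None, None))))  # -> "paused for approval"
```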
Python and FastAPI
The orchestration APIs (receive chat, call retrieval, call the model, stream tokens back) were implemented using FastAPI in Python because the team could iterate quickly and use async calls to the vector store without blocking threads.
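A stripped-down shape of the chat endpoint, with the retrieval and model work stubbed out as an async generator:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

async def token_stream(question: str):
    # stub: the real generator awaited retrieval and the LLM client here,
    # yielding tokens as they arrived so the UI stayed responsive
    for token in ["Two ", "documents ", "are ", "required."]:
        yield token

@app.post("/chat")
async def chat(req: ChatRequest):
    return StreamingResponse(token_stream(req.question), media_type="text/plain")
```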
Batch jobs (re-indexing documents, offline evaluation runs) were also written in Python so the same code paths could be tested in notebooks and then scheduled as jobs. These services sat next to existing Java services and reused the bank’s login tokens and logging so operations did not get a separate, shadow stack.
OptivalueTek · Project-1
Core banking services for a large retail bank
Most of the time was spent on customer and account transaction flows: opening and servicing accounts, posting debits and credits, limits and holds, and the APIs that branch, mobile, and partner channels called day to day. As technical lead, I worked to keep a microservice architecture on Java and Spring Boot coherent: clear ownership per domain, safe releases, and traces that made sense when money moved.
Microservices & throughput
The transaction backbone was split into Spring Boot microservices so customer profiles, account balances and postings, and limit or hold rules each had their own deployable unit and database boundary. That made it easier to scale the hot paths (for example heavy posting windows) without dragging every consumer along.
Throughput and SLAs were handled by tightening API contracts, batching where the bank allowed it, and keeping MongoDB (and supporting stores) behind clear repository interfaces so we could tune indexes and write patterns without leaking storage details to callers.
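The services themselves were Java; purely to show the shape of that boundary, a short Python sketch of a repository interface that keeps storage details behind it:

```python
from abc import ABC, abstractmethod

class PostingRepository(ABC):
    """Callers see postings and balances, never MongoDB queries or indexes."""

    @abstractmethod
    def append_posting(self, account_id: str, amount_minor: int) -> None: ...

    @abstractmethod
    def balance_minor(self, account_id: str) -> int: ...

class InMemoryPostingRepository(PostingRepository):
    # a Mongo-backed implementation can replace this without touching callers,
    # which is what made index and write-pattern tuning safe
    def __init__(self):
        self._postings: dict[str, list[int]] = {}

    def append_posting(self, account_id: str, amount_minor: int) -> None:
        self._postings.setdefault(account_id, []).append(amount_minor)

    def balance_minor(self, account_id: str) -> int:
        return sum(self._postings.get(account_id, []))
```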
AWS & Kafka pipelines
Runtime and data sat on AWS: services on EC2, relational data on RDS, artifacts and reports on S3, with IAM roles scoped so each service only reached what it needed. Network and bucket policies were treated as part of the design, not an afterthought, because the bank expected a clear blast radius.
Account and customer events (balance updates, status changes, notifications to downstream systems) were published on Kafka topics so readers could catch up or replay without calling the core APIs on every tick. We chose partitioning and retry semantics so ordering matched what finance and operations expected when they reconciled.
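The producers were Java services, but the keying idea reads the same in any client; a sketch in Python with confluent-kafka, with an illustrative topic name and config:

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # illustrative config

def publish_balance_update(account_id: str, balance_minor: int) -> None:
    # keying by account id pins all events for one account to one partition,
    # so downstream consumers replay them in posting order during reconciliation
    producer.produce(
        "account.balance.updated",   # hypothetical topic name
        key=account_id,
        value=json.dumps({"account_id": account_id, "balance_minor": balance_minor}),
    )

publish_balance_update("ACC-1042", 125_000)
producer.flush()
```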
Java & Spring Boot banking services
The implementations were written in Java (aligned with the bank’s LTS line) using Spring Boot for HTTP APIs, validation, and integration with security and data layers. Transactional boundaries were kept explicit—when a transfer or fee post failed halfway, the state rolled back in a way auditors and support could reason about.
A large part of the lead role was working with customer and product stakeholders: turning “how the branch handles this exception” into DTOs, error codes, and idempotent endpoints so mobile and partner channels did not each invent their own rules. The same patterns applied to account lifecycle changes so dormant or restricted accounts stayed consistent across services.
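Idempotency reduced to a simple contract. Sketched here in Python, with an in-memory map standing in for the durable store the services actually used:

```python
_processed: dict[str, dict] = {}  # in production: a durable store with a TTL, not a dict

def post_transfer(idempotency_key: str, transfer: dict) -> dict:
    # a retried or duplicated request returns the original result
    # instead of moving the money a second time
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    result = {"status": "POSTED", **transfer}   # placeholder for the real posting logic
    _processed[idempotency_key] = result
    return result

first  = post_transfer("chan-42-req-7", {"from": "ACC-1", "to": "ACC-2", "amount_minor": 5_000})
replay = post_transfer("chan-42-req-7", {"from": "ACC-1", "to": "ACC-2", "amount_minor": 5_000})
assert first is replay  # the duplicate did not post again
```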
Containers, GitOps & zero-downtime
Every service shipped as a container and ran on the bank’s Kubernetes / OpenShift footprint with Helm charts versioned next to the code. Rolling deployments and readiness checks were set up so we could promote builds through Jenkins and Bitbucket pipelines without taking a hard outage during business hours.
GitOps-style discipline meant environment drift showed up as a diff instead of a surprise: what ran in production matched what reviewers had signed off in the chart and config repo.
Observability & incident readiness
Production signals were wired through Datadog: dashboards per service family, distributed tracing across the synchronous call path, and alerts on error rate and latency jumps during settlement windows.
When incidents happened, traces plus structured logs shortened the loop from “customer X saw a timeout” to “this dependency or partition skew.” As lead, I ran or joined post-incident reviews so we fixed the class of issue—not only the single ticket.
Quality, AI-assisted delivery & mentoring
Regression safety relied on JUnit 5 and Mockito around service and repository layers, with contract tests on the noisiest integrations. Pull requests stayed small enough that reviewers could actually see behavioral change, not only line noise.
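The real tests were JUnit 5 and Mockito; the same pattern in Python, a mocked repository around one small (invented) service rule:

```python
import pytest
from unittest.mock import Mock

def apply_fee(repo, account_id: str, fee_minor: int) -> None:
    # service rule under test: never post a fee the balance cannot cover
    if repo.balance_minor(account_id) < fee_minor:
        raise ValueError("insufficient funds for fee")
    repo.append_posting(account_id, -fee_minor)

def test_fee_rejected_when_balance_too_low():
    repo = Mock()
    repo.balance_minor.return_value = 100
    with pytest.raises(ValueError):
        apply_fee(repo, "ACC-1", 500)
    repo.append_posting.assert_not_called()  # behavioral assertion, not line coverage
```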
Day to day we used Claude, Cursor, and Copilot for draft design notes, test scaffolding, and refactors—always with human review before anything touched customer money paths. Agile ceremonies stayed honest about risk, and I spent time mentoring engineers on Spring patterns and production ownership so the team did not depend on one person for every release decision.
Wipro · Project-0
Pensions and annuities for a leading insurer
The programme served a regulated life and pensions book: schemes, members, and the annuity and drawdown journeys that sit behind adviser and member-facing channels. As a senior software engineer, my day-to-day was Java and Spring Boot services: turning domain rules into stable REST APIs, batch extracts, and integrations that had to stay correct when regulators or product teams moved the goalposts.
Pensions & annuity integrations
Domain-heavy endpoints were implemented for pensions and annuity workflows: quotes and illustrations, accumulation updates, payout schedules, and the hand-offs into policy-admin and finance systems. We kept contracts explicit—versioned REST APIs, predictable error shapes, and idempotent batch steps where the same extract could be re-run after a failure without duplicating money movement.
Third-party and legacy adapters were wrapped behind narrow interfaces so the core service did not inherit every quirk of a mainframe or partner feed. That made it easier for India and UK squads to parallelise work while still sharing the same domain vocabulary in code reviews.
Spring Cloud on AWS
Microservice-style boundaries were implemented with Spring Boot and Spring Cloud patterns—configuration, discovery, and resilience around calls that often crossed VPC boundaries to AWS-hosted dependencies. We treated timeouts, retries, and circuit breaking as part of the contract because annuity calculations and external pricing calls could degrade without warning.
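The production resilience layer was Spring Cloud configuration rather than hand-rolled code; the breaker idea itself, stripped to a few lines of Python:

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, fail fast until reset_after seconds pass."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: not waiting on a degraded dependency")
            self.opened_at = None  # half-open: let one call probe the dependency
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

pricing = CircuitBreaker(max_failures=3, reset_after=30.0)
# quote = pricing.call(fetch_external_price, product_id)  # hypothetical pricing call
```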
Operational hygiene mattered as much as code: environment parity between regions, sensible logging defaults, and sizing discussions with platform teams so peak batch windows did not starve interactive traffic.
Security, OAuth2 & data integrity
Member and scheme data moved through OAuth2-protected APIs using Spring Security—token validation, least-privilege scopes, and clear separation between what a channel application could read versus what only operations could unlock.
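Enforcement lived in Spring Security, but the scope check at the heart of it is short enough to show; the scope names here are invented:

```python
def require_scope(claims: dict, needed: str) -> None:
    # the token has already been validated; scopes arrive space-separated per RFC 6749
    granted = set(claims.get("scope", "").split())
    if needed not in granted:
        raise PermissionError(f"missing scope: {needed}")

require_scope({"scope": "member:read quotes:read"}, "member:read")   # channel app: ok
# require_scope({"scope": "member:read"}, "payout:write")            # would raise
```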
Data integrity was enforced at the database layer where it mattered most: constraints, audit columns, and migration discipline so production changes did not silently widen what a field could hold. Encryption in transit was non-negotiable for anything carrying tax identifiers, bank details, or payout instructions.
TDD culture & mentoring
Regression safety leaned on JUnit and Mockito around services and repositories, with heavier tests on the nastiest integration seams. Pull requests were expected to show behavior change, not only refactors, and we used pairing when someone was touching a sensitive payout path for the first time.
Part of the role was mentoring engineers across India and the UK: onboarding to the pensions vocabulary, walking through Spring idioms, and reinforcing a shared definition of “done” that included production checks—not only story acceptance.
TCS · Project-0
General insurance delivery for a leading carrier
The account centred on general insurance (motor, property, and related personal lines), where policy lifecycle, renewals, and claims touch the same operational data. As a developer, I worked mostly on Java and Spring Boot services: REST endpoints for channel and internal consumers, batch jobs for extracts and rating refreshes, and careful SQL so high-volume reads did not destabilise shared schemas.
General insurance platform services
Policy and renewal flows were implemented as small Spring components with clear service boundaries—controllers stayed thin, validation lived next to DTOs, and persistence went through repositories so we could refactor storage without rewriting every caller.
Claims-adjacent features (intake hand-offs, status lookups, document references) were exposed through the same REST style the wider programme standardised on: predictable status codes, explicit error bodies, and pagination where lists could grow without blowing client memory.
Testing & SQL foundations
JUnit and Mockito were used the way the team expected: fast unit tests around services, mocked integrations for partners we could not hit from CI, and a bias toward writing a failing test before fixing a regression so the same bug did not reappear under a different story id.
SQL tuning mattered because many screens and batches hit the same few tables. We traced slow queries with the DBA, added indexes where access patterns justified them, and avoided “clever” ORM mappings that generated accidental cartesian products during peak renewal windows.
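The habit that mattered was checking the access path before and after a change; illustrated here with sqlite3 (production ran on a shared RDBMS and index changes went through the DBA):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (id INTEGER PRIMARY KEY, holder_id INT, renewal_date TEXT)")

plan = "EXPLAIN QUERY PLAN SELECT * FROM policy WHERE holder_id = ?"
print(con.execute(plan, (42,)).fetchall())   # SCAN policy -> full table scan

con.execute("CREATE INDEX idx_policy_holder ON policy(holder_id)")
print(con.execute(plan, (42,)).fetchall())   # SEARCH policy USING INDEX idx_policy_holder
```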
Agile delivery & incidents
Delivery ran through Agile release trains with Scrum ceremonies—refinement, sprint planning, demos—so scope was negotiated openly with BAs and QA rather than inferred from ticket titles alone.
When production misbehaved, I joined RCA discussions with support and ops: pull logs, reproduce against a copy of data where safe, and propose a fix plus a test or guardrail so the incident class did not repeat quietly after the weekend.