AI Integrations Backend Engineer

Added: 7 days ago
Location:
Type: Full time
Salary: Not Specified

Wood Wide AI is a Carnegie Mellon spin-out pioneering a neuro-symbolic intelligence layer for enterprise numeric and tabular data. Our platform transforms raw numeric signals into rich embeddings for prediction, anomaly detection, and reasoning, enabling organizations to move beyond brittle ML pipelines toward adaptive, context-aware intelligence.

We’re early, fast-moving, and ambitious. Our founders combine deep research experience with hands-on product execution, and we’re building a lean, world-class team that thrives on curiosity, ownership, and speed.

About The Role

We’re hiring an AI Integrations Backend Engineer to bridge our core numeric intelligence engine with the rapidly evolving agentic ecosystem. You’ll design and maintain the interfaces that connect Wood Wide AI’s embeddings and reasoning engine to LLM agents, orchestration frameworks, and Model Context Protocol (MCP)-based tools. This role blends backend engineering, AI systems integration, and developer experience to make our product plug-and-play for AI builders and enterprises alike.

You’ll work closely with the founding team to turn cutting-edge research into robust, production-grade microservices and SDKs that power agentic workflows.

What You’ll Do

  • Design and build core backend APIs using Python (FastAPI) to serve numeric intelligence functions such as embedding generation, reasoning calls, and model inference endpoints (a minimal endpoint sketch follows this list).

  • Integrate our engine with LLM agents and tool-calling interfaces (OpenAI, Anthropic, Gemini) to enable structured reasoning over numeric data.

  • Develop microservices and a Model Context Protocol (MCP) server, exposing modular “tools” that agents can securely invoke to process tabular or time-series data.

  • Orchestrate agentic workflows using frameworks such as Vercel AI SDK, LangGraph, PydanticAI, or custom planners, and evaluate trade-offs in performance and observability.

  • Build and maintain a Python SDK with clean abstractions and developer-first ergonomics.

  • Develop data connectors for major environments such as Databricks, Snowflake, Postgres, and S3/GCS.

  • Implement auth, rate limiting, usage metering, and structured logging for reliable production operations.

  • Containerize and deploy microservices via Docker, GitHub Actions, and GCP/AWS, ensuring scalability and maintainability.

  • Collaborate cross-functionally with ML and DX teammates to ensure seamless data flow and user experience.

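To make the first responsibility above concrete, here is a minimal sketch of the kind of FastAPI endpoint this role would own. The route, request/response models, and the embed_rows() helper are hypothetical placeholders for illustration, not Wood Wide AI's actual API.

```python
# Minimal FastAPI sketch of a numeric-embedding endpoint.
# The route, models, and embed_rows() helper are hypothetical placeholders.
from typing import List

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="numeric-intelligence-api")


class EmbedRequest(BaseModel):
    # Rows of numeric/tabular data to embed, e.g. [[1.0, 2.5], [0.3, 4.1]]
    rows: List[List[float]]


class EmbedResponse(BaseModel):
    embeddings: List[List[float]]


def embed_rows(rows: List[List[float]]) -> List[List[float]]:
    # Placeholder: the real service would call the numeric intelligence
    # engine here; this stub just returns fixed-size zero vectors.
    return [[0.0] * 8 for _ in rows]


@app.post("/v1/embeddings", response_model=EmbedResponse)
def create_embeddings(req: EmbedRequest) -> EmbedResponse:
    if not req.rows:
        raise HTTPException(status_code=422, detail="rows must be non-empty")
    return EmbedResponse(embeddings=embed_rows(req.rows))
```

During local development a service like this can be run with `uvicorn <module>:app`; in production it would sit behind the auth, rate limiting, and usage metering described above.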

What We’re Looking For

  • 1-3 years of backend engineering experience in Python (FastAPI, Flask, or similar).

  • Proven experience designing API-first microservices and integrating with AI or ML systems.

  • Experience designing and managing scalable data pipelines and storage solutions using tools like Postgres, Redis, Kafka, and Airbyte.

  • Experience with Docker, GitHub Actions, and cloud providers (e.g. GCP, AWS).

  • Strong fundamentals in REST/gRPC design, authentication, and CI/CD.

  • Familiarity with LLM tool-calling APIs, agentic orchestration frameworks, and MCP-based architectures (see the tool-calling sketch after this list).

  • Experience with LangGraph, PydanticAI, MCP servers, or equivalent orchestration stacks.

  • Knowledge of vector databases (FAISS, pgvector, Pinecone, Weaviate).

  • Experience using LLM APIs such as OpenAI's and Anthropic's.

  • Clear communication, strong documentation, and a collaborative mindset.
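
As a reference point for the tool-calling item above, the snippet below shows the general shape of declaring a function tool and reading the model's tool calls with the OpenAI Python SDK. The `summarize_table` tool and its schema are made-up placeholders, not part of Wood Wide AI's product; Anthropic and Gemini follow a similar pattern with their own schemas.

```python
# Sketch of declaring a function tool and inspecting the model's tool calls
# with the OpenAI Python SDK. The summarize_table tool is a hypothetical example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "summarize_table",
            "description": "Return summary statistics for a numeric table.",
            "parameters": {
                "type": "object",
                "properties": {"table_id": {"type": "string"}},
                "required": ["table_id"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize table sales_2024 for me."}],
    tools=tools,
)

# If the model chose to call the tool, its name and JSON arguments are here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```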

Bonus Points:

  • Solid understanding of async programming, streaming APIs, and structured data handling.

  • Background in numerical or tabular data systems, or working with embeddings and ML inference pipelines.

Why Join Us

  • Join a high-caliber founding team defining a new layer of the AI stack.

  • Work on the next frontier of GenAI at the intersection of symbolic reasoning, numeric ML, and agentic intelligence.

  • Ship real systems that power the next generation of AI agents.

  • Flexible, high-ownership work environment with deep technical impact and visibility.

Interested?

Reach out! We value capability, curiosity, and drive above all.
