Intermediate Software Engineer - Artificial Intelligence (AI)

Added
1 hour ago
Type
Full time

Related skills

Golang, PostgreSQL, Python, Kubernetes, LLMs

📋 Description

  • Design AI features for the domain services platform using Python and Go.
  • Integrate and fine-tune open-source models (e.g., LLaMA 3.2) via Ollama.
  • Research and implement emerging AI technologies to build smarter products.
  • Collaborate with teams to rapidly prototype ML/LLM features.
  • Contribute to a scalable, high-performance AI stack while upholding ethical AI usage.
  • Engage in open-source AI ecosystem and share tools with the team.

🎯 Requirements

  • Bachelor's degree in Software Engineering, Computer Science, or related field.
  • 3+ years of professional software engineering experience in production environments.
  • Strong proficiency in Python and Golang.
  • Solid foundation in software design principles, patterns, and SOA.
  • Experience contributing to scalable systems and component-level architecture.
  • Ability to design and build RESTful APIs for model serving and AI-enabled workflows.
  • Working knowledge of relational/SQL databases (preferably PostgreSQL) and data modeling for AI use cases.
  • Strong understanding of modern LLM concepts and transformer architectures.
  • Hands-on experience adapting and deploying open-source models (LLaMA, Mistral, Mixtral) using Ollama or Hugging Face Transformers.
  • Experience with fine-tuning techniques (LoRA, QLoRA, PEFT) for domain adaptation.
  • Proficiency in prompt engineering (few-shot, chain-of-thought, structured outputs).
  • Familiarity with model serving patterns for scalable inference.
  • Experience designing and implementing Retrieval-Augmented Generation (RAG) pipelines end-to-end.
  • Hands-on experience with vector databases (pgvector, Pinecone, Weaviate).
  • Familiarity with embedding models, chunking strategies, and semantic search patterns.
  • Understanding of data pipelines for ingestion, transformation, and inference result storage.
  • Familiarity with Model Context Protocol (MCP) server design patterns.
  • Experience with agent orchestration frameworks (LangChain, LangGraph).
  • Understanding of tool use, function calling, and multi-step reasoning in LLM workflows.
  • Experience with LLM evaluation frameworks (RAGAS, promptfoo, or custom pipelines).
  • Familiarity with observability and tracing tools (LangSmith, Helicone).
  • Comfort with structured logging, metrics, and alerting for AI workloads.
  • Experience with containerization and cloud-native deployment (AWS).
  • Familiarity with Kubernetes or EKS for scaling model-serving workloads.
  • Understanding of GPU considerations for inference (quantization, batching, memory trade-offs).
  • Active interest in the open-source AI ecosystem.
  • Strong collaboration and communication skills across technical and business teams.
  • Enthusiasm for emerging AI technologies with a delivery-focused mindset.
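To illustrate the RAG-related requirements above (chunking strategies and embedding pre-processing), here is a minimal sketch of one common approach: fixed-size chunks with overlap, applied before computing embeddings. The function name and parameters are illustrative, not taken from any specific framework mentioned in the posting.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap,
    a common pre-processing step before computing embeddings for
    semantic search over a vector database."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by less than chunk_size so chunks overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

The overlap preserves context that would otherwise be cut at chunk boundaries; production pipelines often chunk by tokens or sentences rather than characters.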

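The tool-use and function-calling requirement can be sketched as a simple dispatcher: the model emits a JSON tool call, and application code parses it and invokes the matching function. The tool name `get_domain_status` and its behavior are hypothetical stand-ins for a real domain-services tool, and the JSON call shape is an assumption rather than a specific provider's schema.

```python
import json


def get_domain_status(domain: str) -> dict:
    # Hypothetical tool: a real deployment would query a registry service.
    return {"domain": domain, "available": domain.endswith(".dev")}


# Registry mapping tool names (as exposed to the LLM) to Python callables.
TOOLS = {"get_domain_status": get_domain_status}


def dispatch_tool_call(raw_call: str) -> dict:
    """Parse a model-emitted tool call (JSON with 'name' and 'arguments')
    and invoke the matching registered function."""
    call = json.loads(raw_call)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise KeyError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])
```

In a multi-step agent loop, the dispatcher's return value would be serialized back into the conversation so the model can reason over the tool result.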
๐ŸŽ Benefits

  • Remote-first culture; work from anywhere with Internet.
  • Global team across 20+ countries.
  • Commitment to diversity and inclusion.
  • Reasonable accommodations for applicants with disabilities.