Ref: #63661
Job Title: Data Platform Engineer
Location: New York City (Hybrid)
About Us:
We are a fast-growing NYC startup at the intersection of artificial intelligence and workforce management. Our mission is to transform how companies manage their workforce through AI-powered solutions that enhance productivity, optimize operations, and deliver actionable insights. Join a dynamic team of engineers, data scientists, and innovators who are passionate about shaping the future of work.
Role Overview:
As a Data Platform Engineer, you will build, maintain, and scale the data infrastructure that powers the company's AI-driven workforce solutions. The role requires a deep understanding of data pipelines, infrastructure as code (IaC), and cloud-native technologies. You'll work closely with cross-functional teams to keep the data platform scalable, reliable, and performant.
Key Responsibilities:
- Design, build, and maintain scalable data infrastructure that supports our AI models and analytics.
- Develop, monitor, and optimize data pipelines and workflows using SQL, Python, and Bash.
- Manage Kubernetes clusters to ensure efficient deployment and scaling of applications.
- Automate infrastructure provisioning using Terraform to enable rapid scaling and consistency.
- Use monitoring and alerting tools such as Prometheus and Grafana to ensure system health and performance.
- Implement continuous delivery with tools like ArgoCD to streamline deployment processes.
- Collaborate with data scientists, engineers, and other stakeholders to understand platform needs and optimize workflows.
- Troubleshoot and resolve issues across the stack (infrastructure, pipelines, and databases).
Skills & Qualifications:
- Proficiency in SQL for querying and managing large datasets.
- Strong experience with Kubernetes for container orchestration, including configuration and scaling.
- Scripting skills in Python and Bash for process automation and pipeline optimization.
- Experience with Infrastructure as Code (IaC) using Terraform to manage cloud environments.
- Familiarity with ArgoCD for GitOps-based continuous deployment.
- Knowledge of Prometheus and Grafana for monitoring and visualizing system metrics.
- Understanding of cloud platforms such as AWS, GCP, or Azure.
- Strong problem-solving skills and the ability to work in a fast-paced startup environment.
Nice to Have:
- Experience with streaming and big data technologies such as Apache Kafka, Spark, or Flink.
- Familiarity with distributed data storage solutions (e.g., Cassandra, Elasticsearch).
- Knowledge of machine learning concepts and workflows.
If you’re passionate about building robust data platforms and excited about the possibilities of AI in the workforce, we’d love to hear from you!