Data Analyst / Data Engineer
Business Class
25 September 2025
Chișinău, Centru
At least 2 years of experience
Full-time
Higher education
At the employer's location
At Business-Class, we build technology that powers premium travel services. To scale effectively, we need reliable data systems and strong engineering practices. We are looking for a Senior Data Engineer to take ownership of our data infrastructure, ensuring high-quality, secure, and production-ready data for analytics, product development, and future AI/ML initiatives.
The Role:
As Senior Data Engineer, you will design, build, and maintain scalable data pipelines and data models on AWS. You will be responsible for the full data lifecycle: ingestion, transformation, modeling, and delivery. You will work closely with analysts, engineers, and product managers to ensure that data supports decision-making and product innovation.
Responsibilities:
Design, build, and manage production-grade ETL/ELT pipelines using Python and AWS services (a brief illustrative sketch follows this list).
Ingest and process data from multiple sources (APIs, SFTP, RDBMS, third-party integrations).
Develop and maintain optimized data models for analytics and machine learning.
Ensure high data quality, security, and compliance with best practices.
Monitor and troubleshoot pipeline performance to ensure reliability and scalability.
Optimize SQL queries, indexes, and schema designs for high-performance workloads.
Work with stakeholders to translate business needs into technical data solutions.
Evaluate and implement AI/ML solutions where they add measurable business value.
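To make the day-to-day work concrete, here is a minimal, illustrative sketch of the kind of ingestion step described above, using libraries named later in this posting (requests, pydantic, pandas, awswrangler). The API endpoint, S3 path, and record fields are invented placeholders, not details of the actual role.

import pandas as pd
import requests
import awswrangler as wr
from datetime import datetime
from pydantic import BaseModel


class Booking(BaseModel):
    # Hypothetical record schema; real field names are an assumption.
    booking_id: str
    amount: float
    created_at: datetime


def ingest_bookings(api_url: str, s3_path: str) -> int:
    """Pull records from a source API, validate them, and land them as Parquet on S3."""
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    validated = [Booking(**item).model_dump() for item in response.json()]  # pydantic v2
    df = pd.DataFrame(validated)
    # Write an appendable Parquet dataset that Glue/Athena can query directly.
    wr.s3.to_parquet(df=df, path=s3_path, dataset=True, mode="append")
    return len(df)


if __name__ == "__main__":
    # Placeholder endpoint and bucket.
    ingest_bookings("https://example.com/api/bookings", "s3://example-bucket/raw/bookings/")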
Must-Have Requirements:
5+ years of professional experience in Data Engineering with production systems.
Advanced SQL expertise: query optimization, indexing, schema design in PostgreSQL/MySQL.
Strong Python skills with data-focused libraries (pandas, pydantic, boto3, awswrangler, requests, PySpark).
Hands-on experience with AWS data services (Glue, Athena, Lambda, S3); see the short Athena query sketch after this list.
Solid experience designing and maintaining ETL/ELT pipelines at scale.
Strong knowledge of data modeling (Star Schema, Data Vault) and formats (Parquet, Apache Iceberg, CSV).
Proven ability to manage large-scale datasets (tens of millions of rows, or 100GB+ daily ingestion).
Practical experience with at least one NoSQL database (e.g., MongoDB, DynamoDB).
Understanding of data governance, quality assurance, and security best practices.
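As a small illustration of the Glue/Athena/Parquet stack referenced above, the sketch below runs an aggregate query through Athena with awswrangler and returns a pandas DataFrame. The database and table names are hypothetical placeholders.

import awswrangler as wr

# Hypothetical Glue catalog database; substitute a real one.
DATABASE = "analytics"


def daily_revenue():
    """Aggregate a Parquet-backed table via Athena and return the result as a pandas DataFrame."""
    sql = """
        SELECT date_trunc('day', created_at) AS day,
               SUM(amount)                   AS revenue
        FROM bookings
        GROUP BY 1
        ORDER BY 1
    """
    return wr.athena.read_sql_query(sql=sql, database=DATABASE)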
Nice-to-Have:
Experience with Airflow or AWS Managed Workflows.
Familiarity with Salesforce and SOQL.
Travel industry experience.
Experience deploying ML models into production environments (MLOps).
Knowledge of Apache Hudi, Amazon QuickSight, or similar BI tools.
API development experience (FastAPI, OpenAPI).
Relevant certifications (AWS, Databricks, Snowflake).
What We Offer:
Competitive salary, aligned with experience and market standards.
Official employment (IT Park).
Hybrid work model.
Access to professional learning resources (Udemy, O’Reilly, and others).
Address:
Chișinău, Centru
str. Mihai Viteazul, 15A
Last updated:
25 September 2025