
Data Engineer

  • On-site, Remote, Hybrid
  • Kraków, Małopolskie, Poland

Join our team as a Data Engineer and deliver cost-effective data migrations while implementing best practices in cutting-edge data architectures!

Job description

We are #VLteam - tech enthusiasts constantly striving for growth. The team is our foundation, which is why we care most about a friendly atmosphere, plenty of self-development opportunities, and good working conditions. Trust and autonomy are two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself! We are looking for people who want to tackle Data Modelling and Transformation challenges using a modern technology stack.

Job requirements

Data Engineer - Senior

About the role

  • Design and implement data monitoring pipelines to proactively identify and resolve data quality issues before they impact downstream products.

  • Build data ingestion & processing pipelines supporting our customer-facing Data Products.

  • Collaborate with stakeholders to define requirements, develop metrics for data pipeline quality, negotiate data quality SLAs on behalf of downstream data product owners, and create monitoring solutions using Python, Spark, and Airflow.

  • Refine the processes responsible for data quality and data ingestion so that they run cost- and compute-efficiently and follow best practices.

  • Innovate and develop new methodologies that improve access to trustworthy data, accelerating the value delivered by the product data team.
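As a hedged illustration of the kind of data-quality monitoring the responsibilities above describe (not part of the requirements; all names, thresholds, and sample data here are hypothetical), a minimal check that such a pipeline task might run could look like this:

```python
# Illustrative sketch only: a simple null-rate check of the sort a
# data-quality monitoring task (e.g. one Airflow task in a DAG) might run.
# All names, thresholds, and the sample batch are hypothetical.

def null_rates(rows, columns):
    """Return the fraction of missing (None) values per column."""
    total = len(rows)
    return {
        col: sum(1 for row in rows if row.get(col) is None) / total
        for col in columns
    }

def violates_sla(rates, max_null_rate=0.05):
    """Return the columns whose null rate exceeds the agreed SLA threshold."""
    return [col for col, rate in rates.items() if rate > max_null_rate]

# Hypothetical batch of ingested records
batch = [
    {"user_id": 1, "country": "PL"},
    {"user_id": 2, "country": None},
    {"user_id": None, "country": "DE"},
    {"user_id": 4, "country": "PL"},
]

rates = null_rates(batch, ["user_id", "country"])
print(violates_sla(rates))  # both columns exceed a 5% null-rate SLA
```

In practice a check like this would be wired into an orchestrator (Airflow in this stack) and alert or block downstream tasks when an SLA is breached.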


Required skills:

  • Python (advanced)

  • Apache Airflow (advanced)

  • SQL and business analytic skills (advanced)

  • Apache Spark (regular)

  • AWS/GCP (regular)

  • DevOps (regular)

  • Trino (basic)

  • Apache Iceberg (basic)

  • Starburst (nice to have)

  • Databricks (nice to have)

  • Terraform (nice to have)

What we expect in general: 

  • Hands-on experience with Python (4+ years as a Data Engineer)

  • Proven experience with data warehouse solutions (e.g., BigQuery, Snowflake)

  • Strong background in data modeling, data catalog concepts, data formats, and data pipelines/ETL design, implementation and maintenance

  • Ability to thrive in an Agile environment, collaborating with team members to solve complex problems with transparency

  • Proficient with AWS/GCP cloud services, including: GCS/S3, EMR/Dataproc, MWAA/Composer

  • Experience working in ecosystems that need improvement, and the drive to implement best practices as a long-term process

  • Experience with data migration from data warehouse solutions (e.g., BigQuery, Snowflake) to cost-effective alternatives is an advantage

  • Familiarity with an Iceberg Lakehouse architecture using Trino is a plus

  • Familiarity with Starburst is a plus

  • Experience with Infrastructure as Code practices, particularly Terraform, is an advantage

Full job description  - Data Engineer (Senior) - Careers VirtusLab


Don’t worry if you don’t meet all the requirements. What matters most is your passion and willingness to develop. Moreover, B2B does not have to be the only form of cooperation. Apply and find out!
