Data Engineer

  • On-site, Remote, Hybrid
  • Kraków, Małopolskie, Poland
  • Cloud & Data Engineering

Join our team as a Data Engineer and deliver cost-effective data migrations while implementing best practices in cutting-edge data architectures!

Job description

We are #VLteam – tech enthusiasts constantly striving for growth. The team is our foundation, which is why we care most about a friendly atmosphere, plenty of self-development opportunities, and good working conditions. Trust and autonomy are two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself!

Job requirements

About the role

You will participate in defining the requirements and architecture for the new platform, implement the solution, and remain involved in its operations and maintenance post-launch. Your work will introduce data governance and management, laying the foundation for accurate and comprehensive reporting that was previously impossible.
You will adhere to and actively promote engineering best practices, data governance standards, and the use of open standards. You will build data ingestion and processing pipelines, and collaborate with stakeholders to define requirements and establish data quality metrics.


Required skills:

  • Python (Advanced)

  • Databricks/Snowflake (Advanced)

  • Data engineering (Advanced)

  • Strong communication skills for collaborating with stakeholders (Advanced)

  • SQL (Regular)

  • Azure/AWS/GCP (Regular)

  • Airflow or another orchestration tool (Regular)

  • dbt (nice to have)

  • Data modelling (nice to have)

  • LLM productivity tools like Cursor/Claude Code (nice to have)

  • Knowledge of the insurance domain (nice to have; considered a strong advantage)

What we expect in general:

  • Hands-on experience with Python

  • Proven experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake)

  • Experience with Databricks or data lakehouse platforms

  • Strong background in data modelling, data catalogue concepts, data formats, and data pipelines/ETL design, implementation and maintenance

  • Ability to thrive in an Agile environment, collaborating with team members to solve complex problems with transparency

  • Experience with AWS/GCP/Azure cloud services, including GCS/S3/ABS, EMR/Dataproc, MWAA/Composer or Microsoft Fabric, and ADF/AWS Glue

  • Experience working in ecosystems that need improvement, and the drive to implement best practices as a long-term process

  • Experience with Infrastructure as Code practices, particularly Terraform, is an advantage

  • Proactive approach

  • Familiarity with Spark is a plus

  • Familiarity with streaming tools is a plus

Full job description: Data Engineer (Senior) – VirtusLab Careers


Don’t worry if you don’t meet all the requirements. What matters most is your passion and willingness to develop. Moreover, a B2B contract does not have to be the only form of cooperation. Apply and find out!
