Data Engineer (Mid-level, Senior, and Lead)


Our client is a software artisan passionate about helping companies build awesome solutions. With an agile process that is built on top of the best engineering practices, their team is comprised of full-stack developers and architects who are well-versed in the latest technologies and love what they do! They believe that transparent, honest, and fluent communication, both remotely and on-site, is a key factor in the success of any project.


What we are looking for:

The ideal candidate is a seasoned engineer oriented toward the Data & Analytics ecosystem (ADF, Databricks, ADLS, Synapse, Python, etc.), with broad experience in enterprise infrastructure, both on-premise and cloud, and in agile teams that leverage the tools that enable success, such as CI/CD pipelines, while following best practices. We are currently evaluating professionals with a range of experience and seniority, and will consider mid-level, senior, and lead candidates.



  • Develop ETL pipelines in Python and Azure Data Factory (Azure is preferred, but we also evaluate AWS experience).
  • Build software and integrate systems through REST APIs and other standard interfaces.
  • Work with a team of professional engineers to automate processes, build and deploy infrastructure as code, and manage the architecture of multi-cloud systems.
  • Participate in agile ceremonies, weekly demos, and similar activities.
  • Communicate your daily commitments.


Required Skills:

  • Upper-intermediate or advanced English (spoken/written)
  • 3+ years of relevant work experience.
  • Proficient with Python, Java, or C#/.NET.
  • Proficient with SQL.
  • Experience with Agile ceremonies.
  • Passionate about good engineering practices and testing.
  • Ability to organize, prioritize and communicate daily/weekly goals.
  • Curious, humble, eager to learn, and driven to get things done.
  • Passion for working in a customer facing setting.


Nice to have:

  • Proficient with ETL products (Spark, Databricks, Snowflake, Azure Data Factory, etc.)
  • Proficient with Azure Data Factory.
  • Proficient with Databricks/Snowflake and PySpark.
  • Proficient developing DevOps/CI/CD pipelines.
  • Proficient with Azure DevOps YAML Pipelines.
  • Proficient with Azure cloud services: ARM templates, API management, App Service, VMs, AKS, ACR, Gateways.
  • Terraform, Docker & Kubernetes.
  • Bash/PowerShell.


Place of work: Remote from Argentina. Candidates living in Buenos Aires are expected to come to the office 1-2 times a month (Vicente López, Buenos Aires).
Contract type: Contractor. Full time (9:00 to 18:00, Argentina time).
Selection process:

  1. HR and technical questionnaire (60 min)
  2. Technical interview with live coding (90 min)

Payment in USD


  • Argentine holidays
  • 2 weeks of vacation
  • Annual training
  • Performance bonus



Publication date: 19-01-2024





(54 9 11) 5324-4284