
Data Engineer with Python/Spark skills

Ukraine

🌎 Akvelon is a well-known US company with offices in Seattle, Mexico, Ukraine, Poland, and Serbia. We are an official vendor of Microsoft and Google, and our clients also include Amazon, Evernote, Intel, HP, Reddit, Pinterest, AT&T, T-Mobile, Starbucks, and LinkedIn. Working with Akvelon means being connected with some of the best and brightest engineering teams around the globe and using a modern technology stack to build Enterprise, CRM, LOB, Cloud, AI and Machine Learning, Cross-Platform, Mobile, and other applications tailored to each client's needs and processes.

We are looking for a Data Engineer with Python/Spark skills to join the Data Platform Team on a 3-month contract basis.

About the Project

Our client is a leading provider of innovative software for terminal operating systems and logistics management. Its products help ports, terminals, and intermodal facilities optimize cargo movement, improve operational efficiency, and streamline supply chain processes. The platform offers data-driven solutions for real-time container tracking, yard management, vessel and rail planning, and automated workflows, enabling businesses to handle growing cargo volumes with greater accuracy and speed.

Responsibilities:

  • Develop, maintain, and optimize ETL pipelines using Python and Apache Spark (see the sketch after this list).
  • Implement data transformations, cleansing, and enrichment processes to support business needs.
  • Work with large-scale distributed data processing using Apache Spark.
  • Design and optimize SQL databases, ensuring efficient data modeling and querying.
  • Work with Kubernetes, ensuring smooth deployment and management of containerized applications.
  • Analyze and document data mapping and data journey workflows.
  • Work with messaging architectures and platforms, understanding their advantages, limitations, and best use cases.
  • Collaborate in an Agile Scrum environment, following industry-standard SDLC practices.
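
To give a concrete flavor of the ETL work described above, here is a minimal PySpark batch sketch; the paths, schema, and column names are illustrative assumptions rather than project specifics.

```python
# A minimal sketch of a batch ETL step: extract raw events, cleanse and
# enrich them, and load the result as partitioned Parquet.
# NOTE: all paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("container-events-etl").getOrCreate()

# Extract: read raw container events (assumed CSV with a header row).
raw = spark.read.option("header", True).csv("s3a://raw-bucket/container_events/")

# Transform: drop incomplete rows, normalize the timestamp, and derive a
# partition column for efficient downstream querying.
clean = (
    raw.dropna(subset=["container_id", "event_time"])
       .withColumn("event_time", F.to_timestamp("event_time"))
       .withColumn("event_date", F.to_date("event_time"))
)

# Load: write Parquet partitioned by date.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://curated-bucket/container_events/"
)

spark.stop()
```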

Requirements:

  • Strong experience in Python for production environments.
  • Proven hands-on experience with Apache Spark (batch and streaming processing; see the streaming sketch after this list).
  • Solid understanding of ETL processes, data transformation, and pipeline optimization.
  • Experience working with large-scale distributed data systems.
  • Proficiency with Kubernetes and familiarity with kubectl.
  • Expertise in SQL, data modeling, and database design.
  • Understanding of messaging architectures and their trade-offs.
  • Experience with data mapping and documentation methods.
  • Previous experience working in an Agile Scrum environment.
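
For the streaming and messaging points above, a hedged Structured Streaming sketch is shown below; the Kafka broker, topic name, and sink paths are assumptions, and the spark-sql-kafka connector package must be available on the Spark classpath.

```python
# A minimal sketch of Spark Structured Streaming consuming from Kafka.
# NOTE: broker address, topic, and paths are hypothetical; running this
# requires the spark-sql-kafka-0-10 connector package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("container-events-stream").getOrCreate()

# Subscribe to a hypothetical topic of container events.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "container-events")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to a string column.
events = stream.select(F.col("value").cast("string").alias("payload"))

# Sink micro-batches to Parquet with checkpointing for fault tolerance.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://curated-bucket/streaming_events/")
          .option("checkpointLocation", "s3a://curated-bucket/checkpoints/")
          .start()
)
query.awaitTermination()
```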

Ready to take the next step? Apply now! 🚀

Yaroslava Rizhko

Recruiting Team Lead

Apply now, or refer a friend and get up to $1.5K for a referral.