Key Responsibilities
- Develop, extend, and optimize data solutions on Azure Databricks using Python, SQL, and PySpark (a minimal sketch follows this list).
- Build and maintain software against Delta Lake tables and relational databases.
- Perform full-cycle engineering: analysis, design, implementation, and testing, with an emphasis on automated testing (see the testing sketch below the list).
- Optimize Spark performance for very large datasets (tens of billions of rows, 200+ columns); a tuning sketch follows the list.
- Collaborate within an agile team distributed across multiple countries.
- Ensure reliability and scalability of sub-ledger and related data systems.
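
To give candidates a flavor of the day-to-day work, here is a minimal sketch of a PySpark job against Delta Lake: read a table, aggregate it, write the result back. All paths, table names, and columns are hypothetical; on Databricks the SparkSession is provided by the runtime, so the builder line is only needed when running outside it.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("subledger-example").getOrCreate()

# Read a (hypothetical) sub-ledger Delta table.
postings = spark.read.format("delta").load("/mnt/lake/subledger/postings")

# Aggregate posting amounts per account and booking date.
daily_balances = (
    postings
    .groupBy("account_id", "booking_date")
    .agg(F.sum("amount").alias("daily_total"))
)

# Write the result back as a partitioned Delta table.
(
    daily_balances.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("booking_date")
    .save("/mnt/lake/subledger/daily_balances")
)
```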
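Automated testing is easiest when transformation logic is factored into plain functions that can run against small in-memory DataFrames. A sketch using pytest; the daily_totals function and all data are illustrative, not part of any existing codebase.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def daily_totals(postings_df):
    """Aggregate posting amounts per account and booking date."""
    return (
        postings_df
        .groupBy("account_id", "booking_date")
        .agg(F.sum("amount").alias("daily_total"))
    )


@pytest.fixture(scope="session")
def spark():
    # A local Spark session is enough for unit-testing transformations.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_daily_totals_sums_per_account_and_date(spark):
    postings = spark.createDataFrame(
        [("A1", "2024-01-01", 10.0),
         ("A1", "2024-01-01", 5.0),
         ("A2", "2024-01-01", 7.0)],
        ["account_id", "booking_date", "amount"],
    )
    result = {
        (r.account_id, r.booking_date): r.daily_total
        for r in daily_totals(postings).collect()
    }
    assert result == {("A1", "2024-01-01"): 15.0, ("A2", "2024-01-01"): 7.0}
```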
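Performance tuning on wide tables usually starts with column pruning, early filtering, and sensible output partitioning rather than exotic settings. An illustrative sketch; the configuration flag is a real Spark setting, while the paths and the coalesce factor are examples, not prescriptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-example").getOrCreate()

# Adaptive Query Execution lets Spark coalesce shuffle partitions and
# re-plan joins at runtime (enabled by default on recent Spark versions).
spark.conf.set("spark.sql.adaptive.enabled", "true")

df = spark.read.format("delta").load("/mnt/lake/subledger/postings")

result = (
    df
    # Column pruning: select only what is needed from the 200+ columns.
    .select("account_id", "booking_date", "amount")
    # Predicate pushdown: filter early so the scan can skip files/partitions.
    .filter(F.col("booking_date") >= "2024-01-01")
    .groupBy("account_id")
    .agg(F.sum("amount").alias("total"))
)

# Coalesce before writing to avoid producing huge numbers of tiny files.
result.coalesce(32).write.format("delta").mode("overwrite").save("/mnt/lake/out/totals")
```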
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.
- Proven software engineering background building complex systems, not just scripting.
- Strong Python development experience at a software engineering level.
- Hands-on PySpark experience.
- Strong SQL and relational data modeling expertise.
- Solid understanding of software engineering best practices (testing, version control, agile).
- Excellent problem-solving and analytical skills.
- Highly proficient in English (written and spoken).