Data Engineer
Full Time
Stockholm, Sweden

ELECTROLUX Group IT Data Science presents

Bring your innovative ideas around Big Data

For us, going to work every day has an even greater purpose than putting the latest product or technology on the market. It’s about improving the everyday lives of millions. By staying humble and open to new ideas, we can push the boundaries of cooking, cleaning and wellbeing at home. But to keep doing so, we need more people who want to innovate and re-imagine what life at home can be.

Data Engineer – Stockholm HQ

As an enterprise producing millions of appliances each year and employing more than 50,000 people, Electrolux already generates massive amounts of data in hundreds of systems across the globe, from sales figures to sensors in factory robots and smart home appliances. And with an explosion of IoT devices just around the corner, these data volumes are expected to grow by an order of magnitude or more in the near future.

The engineering challenges involved in storing, processing and analyzing data at this scale, velocity and complexity are simply too much for traditional enterprise solutions, so our small team of Data Engineers is currently building a scalable data platform in the cloud to address these challenges and help transform Electrolux into a fundamentally data-driven company.

To achieve this, we are looking for experienced data engineers to work as part of the Electrolux Data Science team, a key enabler of the company’s digital agenda and the Group’s Center of Excellence for Big Data.


You will work alongside both Data Engineers and Data Scientists. Your responsibilities include platform work (solution design and implementation of stream processing, ETL and pipeline automation toolchains) as well as concrete pipelining tasks (configuring data flows from source systems to the data lake and/or various lakeshore access layer stacks).
• Proactive. You are self-driven and results-oriented with a positive outlook. You don’t just solve the task at hand; you think ahead, identify operational issues and drive their resolution.
• Results oriented. You are strongly customer-focused and able to develop and sustain a network with cross-functional teams.
• A smart risk taker. You know when to, and when not to challenge conventions.
• Pragmatic. Your solutions are always realistic and doable.
• Flexible. You react quickly and positively to a changing environment.

• Minimum two years of hands-on experience with AWS or Azure
• Solid command of one or more of: Java, Scala, Python
• Good understanding of data structures and database technologies
• Comfortable in Linux/bash
• Experience with good software practices: Version control (git), CI/CD, writing maintainable code, KISS, DRY, SOLID, etc.
• Self-motivated, fast learner, technology agnostic, pragmatic
• B.Sc. in Computer Science (or equivalent)
• Experience in data engineering (non-drag and drop ETL, data wrangling, data quality, warehousing, etc.)
• Experience with open source/open standards big data technologies, e.g.: Spark, Hive, HBase, Cassandra, Drill, Databricks, EMR/HDInsight, etc.
• Experience with streaming data technologies and their uses: Kafka/Kinesis, Confluent Platform, Flink, Samza, Spark Streaming, Druid, Elasticsearch, etc.
• Experience in cloud solution design
• Experience from both startup (Lean/Agile) and enterprise (more waterfall-like) environments
• Excellent command of English, written and spoken.

Contact person: Chiara Salin