Nulogy is a rare company. We value autonomy, mastery, and purpose for our employees, and we foster a highly transparent, honest culture. This has helped us be recognized repeatedly as one of Aon Hewitt's Best Employers in Canada at the Platinum level, an award decided by anonymous employee surveys, not marketing spin. Our motto is: It's not just business, it's personal. After 14 years of working together, the company's founders, who met at the University of Waterloo, are all still at Nulogy, and still great friends!
Nulogy also has a solid business model, generating real revenue by delivering concrete benefits to customers around the world who serve the supply chains of household names like L'Oréal, Sony, Procter & Gamble, and Pfizer. We are still in the early stages of bold, transformative strategies that stretch far beyond where our business is today. For this rapid growth, Nulogy has won the Deloitte Fast 50 award, which identified us as one of the 200 fastest-growing companies in North America.
By joining Nulogy, you will work in intelligent, high-performance teams on challenging problems that have real global impact.
In this role, you will prepare and maintain architectures, data, and data models for data science applications.
As a Data Engineer at Nulogy, you will join an ambitious Data Science team that is transforming the way contract packagers do business. You will have a hand in shaping the data strategy of the organization, working extensively with Data Scientists, Product Development, and other Nulogy stakeholders to architect solutions to highly technical problems, build and own a data pipeline for data science consumption, and bring new data products to life.
RESPONSIBILITIES
- Take ownership of the ETL process and data pipeline for data science consumption
- Support data scientists and other Nulogy stakeholders by making sure that the data is accessible in a useful format
- Lead the process of moving data science projects from the “sandbox” into in-app production
- Work closely with data scientists and Product Development to implement new algorithms that extract data from internal and external sources and develop insights into Nulogy’s supply chain landscape
- Develop and maintain tools to monitor the data pipeline and the performance of data and Machine Learning models after deployment
- Work closely with Engineering to analyze and reduce performance bottlenecks and to find new ways to improve data ingestion
- Work closely with Infrastructure and Engineering to enforce data retention and data security policies
- Benchmark big data tools and technologies and recommend which to adopt to scale data science efforts at Nulogy
- Support Product Development in translating messy real-world problems into scalable software
REQUIREMENTS
- University Degree in Computer Science, Math, or Engineering, or equivalent experience
- 2+ years of experience working with SQL, JSON, and XML
- Well versed in entity resolution (deduplication, record linkage, canonicalization)
- 2+ years of experience with Big Data tools such as Apache Spark or Hadoop in a production environment
- 2+ years of experience with relational databases and Big Data/NoSQL data stores
- An understanding of Machine Learning fundamentals (supervised and unsupervised learning)
NICE TO HAVE
- Experience in infrastructure platforms (AWS, Docker)
- Software development experience (Domain modelling, TDD, pair programming)
- Experience in building APIs and development in a multi-tenant SaaS application
- You are excited about the prospect of joining a data team in its infancy and contributing to its long-term strategy and day-to-day operations
- You are a full-stack data person, comfortable working with messy data regardless of how or where it is stored
- You have great judgement and are capable of making tough decisions, often under uncertainty
- You value hard work and collaboration
Nulogy embraces diversity, and we recognize the need for teams that represent a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our work will be. We encourage everyone to apply.