
Data Engineer (Remote)

Job Description

 

The Data Engineer is part of a team focused on data streaming, data analytics, and data science platforms. This role works with Data Scientists, Data Engineers, Data Analysts, and Architects in a cross-functional team to deliver data products for the business.

 

Job Duties/Responsibilities

  • Develop, debug, and support applications and data pipelines for big data analytics platforms built on Azure Databricks, Kafka, and Neo4j
  • Develop, debug, and support streaming data in Kafka and Databricks environments
  • Work with a multidisciplinary team to create datasets for business intelligence
  • Validate data and debug data issues
  • Work with the Data Science team to create and implement machine learning models
  • Communicate requirements and technical concepts effectively with technical and non-technical members of the project team

 

Skill Requirements

  • 3+ years’ experience designing, building, testing, and maintaining Python-, Java-, or Scala-based applications.
  • 2+ years’ experience with Data Lake/Databricks/Spark in either an administrative, development, or support role is required.
  • Hands-on experience with Azure cloud computing infrastructure (Azure Databricks, Azure Data Factory, Spark)
  • 2+ years’ experience with Big Data Analytics Platforms
  • Demonstrated analytical and problem-solving skills, particularly those that apply to a “Distributed Big Data Computing” environment.
  • Experience in applying Data Science / ML in production to build data-driven products for solving business problems.
  • Strong coding skills in general-purpose languages such as Java and Python, plus Apache Spark and SQL, and familiarity with software engineering principles around testing, code reviews, and deployment.
  • Experience with Databricks, Kafka, Neo4j, H3 is highly valued.
  • Proficient in data analysis and visualization using Python and Power BI.
  • Experience with distributed data processing systems like Spark and proficiency in SQL.
  • Must be able to quickly understand technical and business requirements and translate them into technical implementations

 

Education Requirements

 

  • BS/MS/PhD in Computer Science, Data Science, Statistics, Mathematics, Engineering, Bioinformatics, or Physics

U-Haul Offers:

  • Full Medical Coverage
  • Prescription Plans
  • Dental & Vision Plans
  • Registered Dietitian Program
  • Gym Reimbursement Program
  • Weight Watchers
  • Virtual Doctors’ Visits
  • Career stability
  • Opportunities for advancement
  • Valuable on-the-job training
  • Tuition reimbursement program
  • Free online courses for personal and professional development at U-Haul University®
  • Business travel insurance
  • You Matter Employee Assistance Program
  • Paid holidays, vacation, and sick days
  • Employee Stock Ownership Plan (ESOP)
  • 401(k) Savings Plan
  • Life insurance
  • Critical Illness/Group Accident coverage
  • 24-hour physician available for kids
  • MetLaw Legal program
  • MetLife auto and home insurance
  • Mindset App Program
  • Discounts on cell phone plans, hotels, and more
  • LifeLock Identity Theft protection
  • Savvy consumer wellness programs, from health care tips to financial wellness
  • Dave Ramsey’s SmartDollar Program
  • U-Haul Federal Credit Union



U-Haul is an equal opportunity employer. All applications for employment will be considered without regard to race, color, religion, sex, national origin, physical or mental disability, veteran status, or any other basis protected by applicable federal, provincial, state, or local law. Individual accommodations are available on request for applicants taking part in all aspects of the selection process. Information obtained during this process will only be shared on a need-to-know basis.
