    15 May 2019

    Senior Data Engineer

    Location: Remote
    ID: Job-2466

    Project Description:
    The company provides hotel owners with meaningful insights, allowing them to create memorable experiences for their guests and increase direct revenue. Our products deliver the full range of personalization that modern travelers expect.

    Keywords: Java, Scala, Python, ETL, Spark, PostgreSQL, Oracle, Docker, Kubernetes, JUnit, Mockito, PowerMock, Git

    Duties:

    • Interpret and analyze data from various source systems to support data integration and ingestion
    • Identify, analyze, and interpret trends or patterns in complex data sets
    • Troubleshoot data issues and anomalies and determine the best resolution
    • Ensure data flows and processing function correctly and that all related components stay up to date
    • Manage exploratory data analysis to support database and dashboard development, as well as advanced analytics efforts
    • Design and develop methods, processes, and systems to consolidate and analyze structured and unstructured data
    • Design, implement, and maintain standard data interfaces for data ingestion, including Extract/Transform/Load (ETL/ELT) methodology and implementation, APIs, RESTful web services, and data cleansing
    • Develop and use advanced software programs, algorithms, query techniques, models of complex business problems, and automated processes to cleanse, integrate, and evaluate data sets
    • Maintain, support, and debug complex Apache Spark based implementations (see the sketch after this list)
    • Develop data-driven solutions using current and next-generation technologies to meet the company’s evolving business needs
    • Maintain and update AWS data solutions using technologies such as Elasticsearch
    • Analyze requirements and evaluate technologies for data science capabilities, including one or more of the following: natural language processing, machine learning, predictive modeling, statistical analysis, and hypothesis testing
    • Proactively communicate, in writing and verbally, with internal stakeholders, customers, and partners
    • Create software technical documentation when needed
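
    For illustration, a minimal sketch of the kind of Spark-based ETL job described above, in Java (one of the listed languages). The schema, paths, table names, and connection details are assumptions made for the example, not specifics of the role:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.*;

    // Sketch of a Spark ETL job: extract raw booking events, clean and
    // aggregate them, and load the result into a PostgreSQL reporting table.
    public class BookingRevenueEtl {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("booking-revenue-etl")
                    .getOrCreate();

            // Extract: raw booking events (hypothetical CSV schema).
            Dataset<Row> bookings = spark.read()
                    .option("header", "true")
                    .option("inferSchema", "true")
                    .csv("s3a://example-bucket/raw/bookings/");

            // Transform: drop malformed rows, aggregate direct revenue per hotel.
            Dataset<Row> revenuePerHotel = bookings
                    .filter(col("hotel_id").isNotNull().and(col("amount").gt(0)))
                    .groupBy("hotel_id")
                    .agg(sum("amount").alias("direct_revenue"),
                         count(col("hotel_id")).alias("booking_count"));

            // Load: write the aggregate into PostgreSQL over JDBC.
            revenuePerHotel.write()
                    .format("jdbc")
                    .option("url", "jdbc:postgresql://db.example.com:5432/reporting")
                    .option("dbtable", "hotel_direct_revenue")
                    .option("user", "etl_user")
                    .option("password", System.getenv("ETL_DB_PASSWORD"))
                    .mode("overwrite")
                    .save();

            spark.stop();
        }
    }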

    You will bring:

    • Ability to transform abstract ideas into a working proof of concept and deliver it to production
    • Ability to clearly communicate capabilities, opportunities, and recommendations to both technical and non-technical audiences
    • Experience understanding source data from various systems and platforms for data integration
    • Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms
    • High energy, decisiveness, and the ability to motivate others
    • Ability to meet product launch schedules
    • Positive, entrepreneurial personality
    • Exceptional relationship-building skills
    • Excellent written and oral communication skills
    • Strong analytical capabilities
    • Advanced problem-solving skills
    • Ability to influence decision-making and change
    • Good judgment
    • Self-starter, self-motivated
    • Comfortable working with customers and internal teams in different time zones

    Educational Requirements and Experience:

    • Bachelor’s degree in Computer Science
    • Minimum of 5 years of experience as a data engineer in a fast-paced environment
    • Minimum of 7 years of experience across the software development life cycle
    • 2-3 years of combined Python and Java experience is a plus
    • Experience building scalable and reliable ETL pipelines
    • Significant experience with relational databases (PostgreSQL, Oracle, MySQL), column-oriented databases (Redshift), and Apache Spark
    • Knowledge of NoSQL databases and Kafka
    • Expert in at least one of the following languages: Python, Java, Kotlin, Scala
    • Expert in Shell/Bash scripting
    • Familiarity with machine learning principles
    • Deep knowledge of data mining techniques
    • Experience solving distributed storage and distributed in-memory processing problems on large data sets
    • Experience working with unstructured data and NLP
    • Strong knowledge of UNIX/Linux environments
    • Experience with Docker containers and Kubernetes
    • Excellent knowledge of AWS-based cloud solutions (S3, RDS, Lambda)
    • Software delivery experience in a DevOps/cloud-based environment
    • Knowledge of version control systems (Git, GitLab)
    • Experience writing unit tests (JUnit, Mockito, PowerMock); see the sketch after this list
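
    For illustration, a minimal JUnit 4 + Mockito sketch of the style of unit test the role calls for. RateSource and RateService are hypothetical names invented for this example:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    // Sketch: unit-test business logic by mocking its external dependency.
    public class RateServiceTest {

        // Hypothetical dependency that would normally hit a real data source.
        interface RateSource {
            double nightlyRate(String hotelId);
        }

        // Hypothetical unit under test.
        static class RateService {
            private final RateSource source;
            RateService(RateSource source) { this.source = source; }
            double totalForStay(String hotelId, int nights) {
                return source.nightlyRate(hotelId) * nights;
            }
        }

        @Test
        public void totalForStayMultipliesNightlyRateByNights() {
            // Stub the dependency instead of querying a real database.
            RateSource source = mock(RateSource.class);
            when(source.nightlyRate("h-42")).thenReturn(120.0);

            RateService service = new RateService(source);

            assertEquals(360.0, service.totalForStay("h-42", 3), 1e-9);
            verify(source).nightlyRate("h-42");
        }
    }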

    Send your CV