
Big Data Engineer - Hadoop/Spark (Chennai)



Job description

Position Overview:

We are looking for a skilled and detail-oriented Big Data Engineer to design, develop, and maintain scalable data pipelines and architectures. The role involves working with large datasets, integrating diverse data sources, and ensuring data availability for analytics, machine learning, and business intelligence. The ideal candidate will have strong expertise in big data technologies and the ability to collaborate with cross-functional teams to deliver data-driven solutions.

Key Responsibilities:

- Design, build, and optimize large-scale data pipelines for ingestion, transformation, and storage.
- Work with structured and unstructured data across multiple platforms and sources.
- Implement and maintain data lake and data warehouse solutions.
- Develop scalable ETL (Extract, Transform, Load) processes (see the illustrative sketch after this description).
- Ensure data quality, security, and governance across systems.
- Collaborate with data scientists, analysts, and business teams to provide reliable data solutions.
- Monitor and optimize the performance of big data clusters and processing jobs.
- Research and integrate emerging big data tools and technologies.
- Troubleshoot and resolve issues related to data processing and storage.

Key Skills & Competencies:

- Strong knowledge of big data frameworks (Hadoop, Spark, Flink, Kafka).
- Hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in SQL and NoSQL databases (MongoDB, Cassandra, HBase).
- Strong programming skills in Python, Java, or Scala.
- Knowledge of ETL tools and data integration techniques.
- Familiarity with data warehousing solutions (Snowflake, Redshift, BigQuery).
- Problem-solving and analytical skills with attention to detail.
- Ability to work in cross-functional teams and manage multiple projects.

Qualifications:

- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related field.
- 3 to 6 years of experience in data engineering or big data roles.
- Proven track record in handling large, complex datasets.
- Certifications in big data or cloud platforms (preferred but not mandatory).

(ref: hirist.tech)
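To give a concrete sense of the ETL work the responsibilities describe, here is a minimal PySpark batch-job sketch, purely illustrative and not part of the posting itself: it reads raw events from a hypothetical landing-zone path, applies a simple data-quality rule, and writes partitioned Parquet to a curated zone. The job name, paths, and column names (order_id, amount, created_at, order_date) are assumptions chosen for the example.

```python
# Illustrative sketch only: a minimal PySpark batch ETL job.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3a://datalake/raw/orders/")

# Transform: basic cleansing and enrichment.
orders = (
    raw.dropDuplicates(["order_id"])                       # hypothetical key column
       .filter(F.col("amount") > 0)                        # simple data-quality rule
       .withColumn("order_date", F.to_date("created_at"))  # derive a partition column
)

# Load: write curated data as partitioned Parquet for downstream analytics.
(
    orders.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3a://datalake/curated/orders/")
)

spark.stop()
```

Partitioning the curated output by a date column is a common design choice because downstream analytical queries typically filter on date ranges, which allows Spark (and warehouse engines reading the same files) to prune partitions rather than scan the full dataset.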


Required Skill Profession

Computer Occupations





