Uber Careers Software Engineer 2025

Job Summary: Software Engineer II, Data

Job Title: Software Engineer II, Data
Location: Hyderabad, India
Employment Type: Full-time
Work Model: Hybrid (minimum 50% in-office)
Required Skills:
– Proficiency in a general-purpose language (Java, Python, Go, etc.)
– Understanding of the data tech stack (Spark, Hive)
– Strong problem-solving and coding fundamentals
Desired Skills:
– Data warehousing expertise
– Advanced skills in Spark and Hive
– Strong scripting abilities
– Consultative and advisory skills
Education Requirements: Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience
Experience Required: Proven experience in a data engineering or similar software engineering role
Key Responsibilities:
– Build and manage batch & real-time data products
– Develop and standardize core business metrics
– Optimize data systems for performance, cost, and quality
– Advise product teams on data engineering best practices
Benefits / Work Culture:
– Mission-driven company
– Collaborative, office-centric culture
– Opportunities for growth and impact
– Focus on bold ideas and speed

In the 21st century, data is the lifeblood of innovation. It’s the silent pulse behind every smart recommendation, every optimized route, and every strategic business decision. At Uber, a company fundamentally built on solving complex, real-world logistics problems, this is especially true. Every tap of the app, every mile driven, every delivery completed generates a ripple of data. This raw information, in its petabyte-scale torrent, holds the key to making transportation and delivery safer, faster, and more reliable for everyone. But raw data, much like crude oil, is of limited value in its unrefined state. Its immense potential is only unlocked through meticulous, sophisticated, and scalable refinement.

This is where you come in.

We are seeking a Software Engineer II, Data to join our Delivery Data Solutions team in the thriving tech hub of Hyderabad. This is not merely a back-end role focused on processing logs; it is a strategic, foundational position at the very heart of Uber’s operational intelligence. You will be an architect of clarity in a world of complexity. Your mission will be to transform the chaotic flood of raw event data into a clean, reliable, and powerful stream of structured information—the “canonical data sets” that become the single, undeniable source of truth for the entire Uber Delivery organization.

Imagine the impact: the metrics you develop will be used by executives to chart the company’s global strategy. The real-time pipelines you build will empower data scientists to train machine learning models that shave precious minutes off delivery times. The data quality standards you enforce will ensure that product managers can trust the numbers they see, leading to better, more informed decisions about which features to build next. You will be building the foundational infrastructure that powers analytics, metrics, ML models, and KPIs for dozens of teams. If you are an engineer who craves ownership, thrives on solving puzzles of scale, and wants to see your work have immediate, tangible impact on a global stage, this role is your calling.

About Company: Moving the World Forward, Together

To understand the significance of this role, one must first appreciate the scale and ambition of Uber itself. Born from a simple idea to tap a button and get a ride, Uber has exploded into a global technology platform that is fundamentally reshaping urban mobility and logistics. Our mission is audacious in its scope: “to reimagine the way the world moves for the better.” This goes far beyond ride-hailing. Today, Uber is a multifaceted ecosystem encompassing:

  • Rides: Connecting riders with drivers for personal transportation.
  • Uber Eats: Delivering meals from beloved local restaurants and national chains.
  • Grocery & Goods: Bringing supermarket shelves and convenience stores to your doorstep.
  • Freight: Matching trucking companies with loads to optimize the shipping industry.

This vast, interconnected network generates one of the most complex and interesting data sets on the planet. We are, at our core, a company that solves physical-world problems through digital-world innovation. The challenges we tackle—from predicting ETAs with incredible accuracy and pricing dynamically in real time to optimizing a delivery route for a driver carrying multiple orders—are at the forefront of computer science and engineering.

Our culture is the engine that drives this innovation. It’s built on a few core principles:

  • Bold Ideas: We celebrate those who challenge convention and think on a grand scale. A concept sketched on a whiteboard in Hyderabad can become a pilot program in São Paulo and a global feature rollout in a matter of months. We believe that progress requires a degree of calculated risk-taking and a willingness to venture into the unknown.
  • Customer-Obsessed Impact: We are relentlessly focused on creating real-world value for our users. Whether it’s making a rider’s journey safer, a driver’s earnings more predictable, or a restaurant’s reach wider, our work is grounded in tangible outcomes. “Impact” is our favorite word, and it’s measured by the positive changes we create in the lives of millions.
  • Speed as a Fuel: In the fast-moving world of technology, speed is a competitive advantage. We operate with a bias for action, favoring iterative progress over perfection. We build, we measure, we learn, and we adapt—quickly.
  • Together: The phrase “We win as a team” is not a cliché at Uber; it’s an operating principle. The problems we solve are too complex for any one person to tackle alone. We rely on a diversity of thought, background, and expertise. Collaboration, open feedback, and mutual respect are the bedrocks of our environment, creating a sense of belonging and shared purpose.

Key Responsibilities in Detail: The Architect of Data Clarity

As a Software Engineer II on the Delivery Data Solutions team, your role is both deep and broad. You are a builder, an optimizer, a consultant, and a guardian. Your work ensures that the entire organization can see clearly and act decisively based on high-quality data.

1. Building Foundational Data Products for Batch & Real-Time Use Cases

You are not just writing ETL scripts; you are engineering robust, scalable data products that serve as the backbone for critical business functions. This responsibility is split across two temporal domains, each with its own challenges and requirements:

  • Batch Processing: The Foundation of Intelligence. Here, you will be building and maintaining large-scale, scheduled data workflows that process terabytes of historical data. These pipelines are the workhorses of business intelligence, powering everything from executive dashboards to deep-dive analytical reports.
    • Example in Action: Imagine a product team launches a new “Priority Delivery” feature. Your task would be to build a pipeline that ingests weeks of order data, joins it with driver location data and restaurant preparation time data, and aggregates it all to create a canonical dataset. This dataset would then be used to answer crucial questions: What is the average time saved for Priority orders? How does it impact driver efficiency? What is the feature’s overall adoption rate? Your pipeline transforms billions of raw event rows into a clean, queryable table that tells the story of this product’s success.
  • Real-Time Processing: The Pulse of the Operation. The Uber platform is live and dynamic. Decisions need to be made in seconds, not hours. You will be developing low-latency streaming applications using technologies like Apache Flink or Kafka Streams to provide immediate insights.
    • Example in Action: Consider the “Live Map” in the Uber Eats app that shows a delivery partner’s movement in real-time. Behind that map is a complex real-time data pipeline you might work on. This pipeline processes a continuous stream of location pings from millions of devices, enriches them with order context, and calculates live ETAs. It powers operational dashboards that allow city managers to identify delivery congestion zones in real-time, enabling them to proactively message drivers and customers about delays.
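To make the batch example concrete, here is a minimal, purely illustrative Python sketch of the aggregation logic described above. The field names (`priority`, `placed_at`, `delivered_at`) are hypothetical, not Uber’s actual schema, and in production this logic would run as a Spark job over billions of rows rather than in-memory Python:

```python
from statistics import mean

def build_priority_delivery_dataset(orders):
    """Aggregate raw order events into a small canonical summary record.

    Illustrative sketch only: models the kind of rollup a batch pipeline
    might produce for a hypothetical "Priority Delivery" feature.
    """
    def avg_minutes(rows):
        # Average delivery duration for a slice of orders, if any exist.
        return mean(r["delivered_at"] - r["placed_at"] for r in rows) if rows else None

    priority = [o for o in orders if o["priority"]]
    standard = [o for o in orders if not o["priority"]]
    return {
        "priority_avg_minutes": avg_minutes(priority),
        "standard_avg_minutes": avg_minutes(standard),
        "priority_adoption_rate": len(priority) / len(orders) if orders else 0.0,
    }
```

The same shape of computation—filter, join, aggregate into a clean queryable table—is what the real pipeline would express in Spark DataFrame transformations.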

2. Metrics Development and Standardization: The Keeper of the Truth

In an organization as large and fast-moving as Uber, a common pitfall is metric confusion. When different teams calculate the same metric in slightly different ways, it leads to misalignment, debates about data, and poor decision-making. Your team acts as the official “bureau of standards” for the Delivery org.

  • The Process: You will collaborate closely with data scientists and business leaders to formally define core business metrics like “Monthly Active Couriers,” “Restaurant On-Time Performance,” or “Order Acceptance Rate.” This involves deep discussions to ensure the definition is unambiguous and business-relevant.
  • The Execution: Once defined, you will codify this logic into a production-grade data model. You don’t just document that “Gross Bookings” should exclude cancellations; you build the pipeline that authoritatively calculates it, creating the “source of truth” table that everyone is mandated to use. This eliminates confusion and ensures that when the CEO reviews a dashboard and a product team reviews their A/B test results, they are looking at the same numbers.
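The “source of truth” idea can be sketched in a few lines. The exact definition of “Gross Bookings” below is invented for illustration; the point is that one audited function (or, at scale, one pipeline producing one table) owns the definition, and every dashboard consumes it rather than re-deriving it:

```python
def gross_bookings(orders):
    """Canonical metric (hypothetical definition): sum of order totals,
    excluding cancelled orders. Every consumer uses this one definition."""
    return sum(o["total"] for o in orders if o["status"] != "cancelled")
```

When the definition lives in exactly one place, a change to it (say, also excluding refunded orders) propagates consistently to every report at once.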

3. System Optimization and Data Quality Advocacy: The Pursuit of Excellence

Building a pipeline is the first step; perfecting it is an ongoing journey. This responsibility is about being a steward of Uber’s resources and a guardian of trust in our data.

  • Performance & Cost Optimization: At Uber’s scale, a poorly written query can waste tens of thousands of dollars in computational resources. You will be tasked with profiling existing Spark jobs, identifying bottlenecks (like data skew where 90% of the data goes to one node), and refactoring them for efficiency. This could involve implementing better data partitioning strategies, choosing more efficient file formats like Apache Parquet, or caching intermediate datasets. Your work directly contributes to the company’s bottom line.
  • SLA Adherence: Downstream teams build their products and reports on the promise that your data will be available by a certain time. You are responsible for ensuring your pipelines meet these strict Service Level Agreements (SLAs). This involves building robust monitoring, setting up alerting for failures, and designing systems with fault tolerance and graceful recovery in mind.
  • Data Quality Guardianship: “Garbage in, garbage out” is a cardinal sin in data engineering. You will be the frontline defense. This means embedding data quality checks directly into your pipelines. Is the order_total field suddenly negative? Is the number of records from a specific city dropping to zero? Are duplicate records appearing? You will build systems that automatically validate incoming data, flag anomalies, and, in critical cases, halt a pipeline to prevent the propagation of bad data, thereby protecting the integrity of thousands of downstream decisions.
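The three checks named above can be sketched as a small in-pipeline validator. This is an illustrative Python sketch with hypothetical field names and no real alerting; a production version would emit metrics, page on-call, and in critical cases halt the DAG run:

```python
from collections import Counter

def validate_batch(records, expected_cities):
    """Return a list of anomaly descriptions; empty means the batch is clean.

    Checks the three failure modes from the text: negative totals,
    a city's records dropping to zero, and duplicate records.
    """
    anomalies = []
    if any(r["order_total"] < 0 for r in records):
        anomalies.append("negative order_total")
    seen_cities = {r["city"] for r in records}
    for city in sorted(expected_cities - seen_cities):
        anomalies.append(f"no records for city: {city}")
    counts = Counter(r["order_id"] for r in records)
    if any(n > 1 for n in counts.values()):
        anomalies.append("duplicate order_ids")
    return anomalies  # non-empty → flag the batch, or halt the pipeline
```

A real deployment would wire this into the orchestration layer so that a non-empty result blocks downstream tasks before bad data propagates.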

4. Consulting and Advising on Data Engineering Practices: The Force Multiplier

Perhaps the most unique aspect of this “horizontal” team role is its outward-facing, consultative nature. Your impact is not limited to the code you write yourself; it is amplified through the work you enable others to do.

  • The Internal Consultant: Product engineering teams across the Delivery org will come to you for guidance. A team in San Francisco building a new feature for group orders will need your advice on how to instrument their service to emit the right events. You will review their data models, suggest best practices for schema design, and ensure their new data integrates seamlessly into the broader ecosystem.
  • The Evangelist: Your team will develop tools, libraries, and processes to make data work easier and more consistent for everyone. Part of your role is to champion these solutions—conducting tech talks, writing documentation, and working one-on-one with teams to onboard them. By raising the data engineering IQ of the entire organization, you create a ripple effect of quality and efficiency.

Required Skills and Qualifications: The Non-Negotiable Foundation

To hit the ground running and begin making a meaningful contribution within your first few months, a candidate must possess a strong and demonstrable foundation in the following areas:

  • Strong Software Engineering Fundamentals (Beyond Scripting): This role requires a mindset of building robust, testable, and maintainable systems, not just writing one-off scripts. A Bachelor’s degree in Computer Science or a related field provides this foundation, but we equally value equivalent practical experience gained through impactful work.
    • Coding Proficiency: You must be highly proficient in at least one general-purpose programming language such as Java, Python, or Go. We use these languages for building production-grade data processing applications. For instance, you should be comfortable with concepts like object-oriented design, unit testing, concurrency, and using complex data structures to solve problems efficiently.
  • Core Data Tech Stack Understanding: You need a solid, practical understanding of the modern big data ecosystem. This isn’t about having these words on your resume; it’s about knowing how to use them effectively.
    • Apache Spark: You must understand Spark’s core abstractions (RDDs, DataFrames), its execution model (driver, executors), and how to write efficient transformations and actions. You should know how to avoid common pitfalls like shuffling large datasets.
    • Apache Hive: Experience with Hive is crucial for interacting with our data warehouse. You should be adept at writing and optimizing HiveQL queries, understanding how partitioning and bucketing work, and how Hive manages data in a distributed file system like HDFS.
  • Analytical and Problem-Solving Mindset: Data engineering is a continuous series of puzzles. You will face issues like debugging a job that runs fine on a small sample but fails mysteriously at full scale, or designing a system to handle a 10x traffic spike during a holiday promotion. You need a methodical, analytical approach to deconstructing these problems, forming hypotheses, and validating solutions. A love for deep, logical thinking is essential.
  • Collaborative Communication Skills: The “consultative” aspect of this role cannot be overstated. You must be able to listen to a problem from a non-technical stakeholder, understand the underlying business need, and translate that into a technical solution. Conversely, you need to be able to explain a complex technical constraint or a data quality issue to that same stakeholder in clear, accessible language. The ability to build rapport and work effectively across different functions is a critical requirement for success.
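One of the Hive concepts above—partitioning limiting how much data a query scans—can be modeled in a few lines of Python. This is a toy illustration, not Hive itself: it simply shows why a predicate on the partition column (here a date string) lets the engine prune most of the table:

```python
def partitions_to_scan(all_partitions, date_from, date_to):
    """Toy model of Hive-style partition pruning: a range predicate on the
    partition column touches only the matching date partitions."""
    return [p for p in all_partitions if date_from <= p <= date_to]

# A table with four daily partitions (hypothetical ds values):
parts = ["2025-01-01", "2025-01-02", "2025-01-03", "2025-01-04"]
# A query filtered on ds BETWEEN '2025-01-02' AND '2025-01-03'
# reads 2 of 4 partitions instead of scanning the whole table.
pruned = partitions_to_scan(parts, "2025-01-02", "2025-01-03")
```

The same intuition explains why unpartitioned tables (or predicates that cannot be pushed down to the partition column) force full-table scans at painful cost.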

Desired Skills / Nice-to-Have: What Will Make You Stand Out

While the following skills are not mandatory for application, they are highly valued and will significantly accelerate your ability to make a broad and deep impact from day one.

  • Deep Data Warehousing Expertise: Knowledge of the principles of data warehouse design elevates you from a pipeline coder to a data architect. Familiarity with concepts like dimensional modeling (designing fact and dimension tables in star/snowflake schemas), handling slowly changing dimensions (Type 1, Type 2), and building curated data marts for specific business domains is a major advantage. It shows you can think holistically about how data will be consumed, not just how it is produced.
  • Advanced Proficiency in Spark and Hive: Going beyond basic usage to true mastery. This includes:
    • Spark Performance Tuning: Expertise in diagnosing and resolving performance issues by adjusting configurations related to memory management, parallelism, and garbage collection. Understanding the Spark UI to identify slow stages and tasks is key.
    • Spark Internals: A deeper understanding of the Catalyst optimizer and Tungsten execution engine can help you write code that is automatically optimized by Spark itself.
    • Hive Optimization: Advanced skills in writing efficient Hive queries, using techniques like predicate pushdown, and designing optimal table structures to minimize data scanned per query.
  • Exceptional Scripting and Automation Skills: The ability to quickly whip up a Python script to automate a manual data validation check, a Bash script to orchestrate a series of CLI commands, or a simple web tool to let other teams check pipeline status is incredibly valuable. It demonstrates a proactive approach to eliminating toil and scaling your own effectiveness.
  • A Natural Consultative and Mentoring Mindset: We look for individuals who derive satisfaction from helping others succeed. Do you enjoy explaining a complex concept to a junior engineer? Are you patient when walking a product manager through the nuances of a metric definition? A candidate who is a natural teacher and a force multiplier for the team’s knowledge will thrive in this environment and quickly become an indispensable leader.
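As a small illustration of the Type 2 slowly-changing-dimension pattern mentioned above: instead of overwriting a changed attribute, a Type 2 dimension closes the current row and appends a new version, preserving history. The sketch below uses hypothetical field names and plain Python rows standing in for warehouse tables:

```python
from datetime import date

def apply_scd2(dim_rows, key, new_attrs, effective):
    """Type 2 SCD sketch: close the active row for `key` if its attributes
    changed, then append a new versioned row effective from `effective`."""
    for row in dim_rows:
        if row["key"] == key and row["end_date"] is None:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dim_rows  # nothing changed; keep the active row
            row["end_date"] = effective  # close out the old version
    dim_rows.append(
        {"key": key, **new_attrs, "start_date": effective, "end_date": None}
    )
    return dim_rows
```

Because old versions are retained with validity dates, historical reports join against the version of the dimension that was current at the time of each fact, rather than today’s attributes.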

Team Collaboration and Work Environment: The Power of the Hub

The Delivery Data Solutions team is what is known as a “horizontal” or “platform” team. This means our “product” is the data infrastructure and services that enable other teams—the “vertical” teams who own customer-facing features—to excel. This structure defines your daily interactions and your sphere of influence.

Your key collaborators will include:

  • Data Scientists: They are your primary customers for clean, feature-rich datasets. You will work with them to understand the data requirements for their machine learning models (e.g., predicting restaurant preparation time) and ensure they have access to timely and accurate data for training and inference.
  • Product Managers: They rely on the metrics and dashboards you power to measure the success of their initiatives and to decide what to build next. A strong partnership here ensures that the data you produce directly influences product strategy.
  • Software Engineers (Product Teams): You will act as a trusted advisor to these teams, coaching them on data best practices. This symbiotic relationship is critical: they build the features that generate the data, and you help them do it right and then transform that data into intelligence.
  • Business Operations & Strategy: These teams use your data for high-stakes analysis, from planning market expansions to assessing the competitive landscape. The accuracy of your work directly impacts multi-million dollar investment decisions.

This position is based in our Hyderabad office, a strategic and vibrant technology center for Uber. We believe that our offices are more than just places to work; they are the cultural and collaborative hearts of our company. The spontaneous “whiteboard moments,” the quick hallway conversations that unblock a problem, and the team lunches that build camaraderie are irreplaceable. Therefore, we operate on a hybrid work model, with the expectation that employees spend at least 50% of their working time in the office. This model is designed to strike a balance, offering the flexibility for focused deep work while preserving the magic of in-person connection that fuels our innovation.

Career Growth and Learning Opportunities: Your Trajectory at Uber

Uber is committed to the growth and development of its employees. We view this role not as a static position, but as a dynamic platform for your professional evolution.

  • Technical Depth and Mastery: There is no better environment to become a world-class expert in distributed data systems than at Uber. The sheer scale, complexity, and real-world criticality of our data challenges provide a learning opportunity unmatched anywhere else. You will gain hands-on experience with cutting-edge technologies and architectural patterns that are defining the future of data engineering.
  • Architectural Leadership: As you progress, your responsibilities will shift from implementing components to designing systems. You will be encouraged to take ownership of major technical projects, propose new architectural visions for our data platform, and make key technology selection decisions that will impact the organization for years to come.
  • Mentorship and People Leadership: You will be surrounded by some of the brightest minds in the industry. The learning happens every day through code reviews, design doc feedback, and technical discussions. As you advance, you will have the opportunity to pay it forward by mentoring new hires and junior engineers. For those inclined, this can be a pathway to a Tech Lead role or formal Engineering Management, where you guide and grow a team of engineers.
  • Clear Pathways for Advancement: The career ladder at Uber is transparent. Success as a Software Engineer II naturally leads to progression to Senior Software Engineer, where you are expected to tackle projects of greater scope and ambiguity. Beyond that, roles like Staff Engineer, Principal Engineer, and Engineering Manager become attainable, each with increasing levels of responsibility, impact, and leadership.

Work Culture, Benefits, and People-First Environment

We believe that to do the best work of your life, you need to be supported in all aspects of your life. Our comprehensive benefits and unique culture are designed with this holistic view in mind.

  • A Culture of Impact and Ownership: From your first week, you will be entrusted with significant responsibility. You won’t be a cog in a machine; you will be an owner of a critical piece of Uber’s data infrastructure. This sense of ownership is empowering and is a direct reflection of our trust in your abilities.
  • Inclusive and Diverse Community: We are dedicated to building a workplace that reflects the diverse communities we serve. We have active employee resource groups (ERGs) and ongoing initiatives to ensure everyone feels they belong and can bring their authentic selves to work.
  • Competitive Compensation: We offer a competitive salary and a valuable equity package (RSUs), because we believe our employees should share in the success they help create.
  • Comprehensive Health and Wellness Benefits: Your well-being is a priority. We provide extensive health insurance for you and your family, mental health resources, wellness reimbursements, and parental leave policies to support you through all of life’s stages.
  • Continuous Learning Stipend: We provide generous financial support for your ongoing development, whether it’s for attending industry conferences, enrolling in online courses, or purchasing books. Your growth is our growth.
  • The Office Experience: Our Hyderabad office is designed to be a vibrant community hub. It features state-of-the-art workspaces, collaborative areas, quiet zones for focus, and social spaces to connect with colleagues. We offer meals and snacks to keep you energized throughout the day.
  • Commitment to Accessibility: We are committed to creating an accessible and inclusive experience for all applicants. If you require accommodations during the application process due to a medical condition or religious practices, please contact us at accommodations@uber.com. We are here to support you.

Application Process and Tips for Candidates

We have designed our interview process to be thorough, fair, and a two-way street. It’s an opportunity for us to assess your skills and for you to evaluate if Uber is the right place for you to grow.

The typical process flows as follows:

  1. Online Application: You’ve taken the first step! Submit your resume and application through our careers portal.
  2. Recruiter Screen (30 minutes): A phone call with a recruiter to discuss your background, your interest in Uber and the role, and to provide a high-level overview of the team and process. This is a great time to ask initial questions about the culture and the team’s charter.
  3. Technical Phone Screen (60 minutes): A video call with one of our engineers. You’ll be asked to solve a coding problem using a collaborative editor like CoderPad. This assesses your problem-solving approach, coding skills, and familiarity with data structures and algorithms.
  4. Virtual Onsite Interview (3-4 sessions, 45-60 minutes each): This is the core of the process, consisting of several focused interviews:
    • Data Coding & Problem Solving: A deeper dive into coding, often with a data manipulation twist.
    • Data Modeling: You’ll be given a business scenario (e.g., “Design a system to track Uber Eats orders from request to delivery”) and asked to design the underlying database tables and schemas. We’re looking for your ability to think about relationships, scalability, and query efficiency.
    • System Design: You’ll discuss how you would architect a large-scale data system (e.g., “Design a service to calculate real-time surge pricing”). This evaluates your ability to think about components, trade-offs, scalability, and failure scenarios.
    • Behavioral Interview: We’ll explore your past experiences through questions about your work on teams, how you handled challenging situations, how you collaborated with others, and how you drove projects to completion. Use the STAR (Situation, Task, Action, Result) method to structure your answers.

Tips for a Standout Application and Interview:

  • Tailor Your Resume: Don’t just list your job duties. For each relevant role, highlight a project where you used Spark, Hive, or Java/Python for data engineering. Quantify your impact. Instead of “Worked on a data pipeline,” write “Designed and built a Spark streaming pipeline processing 2TB/day, reducing end-to-end data latency by 60% and enabling real-time fraud detection.”
  • Prepare Your “Data” Story: Be ready to talk in detail about your hands-on experience with the data tech stack. For a project involving Spark, be prepared to discuss: the scale of the data, the business problem you were solving, any performance challenges you faced (e.g., data skew), and how you optimized the job.
  • Communicate Your Thought Process: During technical interviews, we care more about how you think than the immediate answer. Talk through your reasoning. Ask clarifying questions. If you hit a roadblock, explain how you would debug the issue. This collaborative approach is highly valued.
  • Be Curious and Prepared: Come to your interviews with thoughtful questions that show you’ve done your research and are genuinely interested in the role. Ask about the team’s biggest current technical challenges, the career paths of your interviewers, or how the team measures its own success.
  • Showcase Your Collaborative Spirit: In the behavioral interview, use your past experiences to demonstrate that you are a team player. Talk about times you helped a colleague, mentored someone, or navigated a disagreement to reach a better outcome.

Conclusion / Call to Action

The journey to reimagine how the world moves is a grand challenge, one of the most exciting engineering puzzles of our time. This journey is powered by data—data that needs to be refined, trusted, and transformed into intelligence. The Software Engineer II, Data role on the Delivery Data Solutions team is a unique opportunity to be a core architect of this intelligence engine. You will be entrusted with the data that drives our most critical decisions, from the strategic to the operational.

You will be given the autonomy to innovate, the support to take calculated risks, and the platform to see your work make a difference for millions of people around the globe. You will be surrounded by a team that will challenge you, support you, and celebrate your successes.

If you are a data engineer who is not satisfied with just processing data, but wants to own it, perfect it, and use it to empower an entire organization, then this is your moment.

The next step is yours. We are excited to learn about your story, your skills, and your passion.

Apply Now

Hi, I’m VaraPrasad. At Fresher Jobs Hub, I share the latest campus drives, off-campus hiring, and entry-level job opportunities for students and recent graduates. My goal is to make job hunting simpler for graduates by bringing all the latest opportunities together in one place.
