Do you have data in JSON, AVRO, or XML, whether in files, databases, or Hadoop? Come talk to us about your semi-structured data needs!
With over a billion risk points a day and data stored in multiple platforms for different asset classes, the challenge at Barclays was to get a daily consolidated view of their risk exposure, and also provide a way for analysts to interactively drill down to transaction level details to identify issues. Chuck Chakrabarti, Vice President at Barclays, will describe their move to a big data architecture, the challenges faced and lessons learned, and their evaluation of a solution using Kyvos, Tableau, and Cloudera. In this talk, he will cover how to operationalize big data in your organization, common pitfalls to watch out for, tools and methods that worked and didn’t work, and requirements for self-service analytics.
How do you manage data and deliver insights when data volumes are exploding? Do you think this might need more than a "traditional" approach? With more than 30 million members and hundreds of terabytes of data, Ebates moved to a non-traditional approach out of necessity. What does that look like, and how do you keep savvy business users happy when you make the leap?
In this session, we will share why and how we successfully transitioned from traditional BI on a traditional data warehouse to all-in self-service BI on Hadoop.
Join us to learn:
- WHY: Why we chose to run BI on Hadoop
- HOW: How we made the transition; the plan, challenges and end-goals
- WHAT: What we did, achieved, and lessons learned
PepsiCo partners with the country’s biggest retailers, importing huge volumes of shipping, inventory, and POS data to forecast sales. Quickly and accurately visualizing this data to inform our retail partners of sales trends and the impact of promotions is critical to maintaining transparent business relationships with them. In order to accelerate time to insights, we developed a streamlined big data strategy that leverages an enterprise-wide Hadoop data processing platform, data wrangling solution (Trifacta) and data visualization application (Tableau) to scale existing operations. We can now understand and structure data in record time, with overall reporting build time dropping an astounding 90%. This allows us to dive deeper into Tableau, create faster and more accurate data visualizations, and focus on data storytelling instead of piecing that data together. Come to this session to find out how we did it.
Is your organization drowning in the data lake too? Leading big data organizations are leveraging a multi-tiered approach to data for consumption in Tableau. In this session, we will:
- Discuss the challenges of visualizing big data
- Establish best practices for leveraging Hadoop/Spark directly and working with data warehouses
- Talk about in-memory Tableau data extracts for fast visual analysis in Tableau
This session is also available at the following time(s): Thursday at 12:00 pm
Applying big data to an internal business use case is challenging and requires expertise and focus. Even harder is scaling it out across a global enterprise. In this session, we'll explain how GE Power Services has been able to deliver results in an uncertain world by leveraging big data and scaling its platform across a global employee base that spans over 86 countries and 22,000 users.
Traditionally, relational database management systems have been the bread and butter of data analysis, capturing data in a “structured” form consisting of tables of data connected by common values specified in SQL join statements. In the last few years, enterprises have been adding semi-structured data to the mix as JSON files, NoSQL databases, and in Hadoop, among others. Until recently, data analysis tools haven’t been able to directly access semi-structured data in its native form. Semi-structured data had to be reshaped into structured tables. Using JSON files and the new JSON Tableau connector we’ll explore what it means to do data analysis on semi-structured data and how it compares to the more familiar experience of analyzing structured/tabular data.
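To make the contrast concrete, here is a minimal sketch (using Python's standard library, with hypothetical field names) of what "reshaping semi-structured data into structured tables" means: nested JSON records are flattened into flat rows of the kind a SQL join over structured tables would produce. This illustrates the general idea only; it is not how Tableau's JSON connector is implemented.

```python
import json

# A small semi-structured sample: each customer nests a variable-length
# list of orders, so the data has no fixed tabular shape.
raw = """
[
  {"name": "Ada",   "orders": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
  {"name": "Grace", "orders": [{"sku": "A1", "qty": 5}]}
]
"""

def flatten(records):
    """Reshape nested JSON into flat rows, one per (customer, order) pair."""
    rows = []
    for customer in records:
        for order in customer["orders"]:
            rows.append({"name": customer["name"],
                         "sku": order["sku"],
                         "qty": order["qty"]})
    return rows

rows = flatten(json.loads(raw))
for row in rows:
    print(row)
```

Note that the flattening step itself embodies an analytical choice: the row level of detail becomes (customer, order), which changes what counts and aggregates mean compared with one row per customer.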
This session is also available at the following time(s): Wednesday at 10:45 am
In the game of fraud, all (relevant) data must be analyzed. According to the ACFE's Report to the Nations on Occupational Fraud and Abuse, the typical organization loses 5% of revenue in a given year as a result of fraud. Fraud may be big business, but so too is big data as a means to combat incidents of fraud.
Most organizations, regardless of size, will experience some type of fraud during the course of their business. These same organizations likely store many volumes of data that could be leveraged to combat incidents of fraud. In this session, we will explore the fundamental definition of data and its relationship to fraud. We will then explore how data analytics, or in this case fraud analytics, can prevent, deter, and detect incidents of fraud.
In this session we will dive into how the Netflix Data Engineering and Analytics team uses Tableau, Spark, Hive, Presto, and Redshift to deliver rapid analytics and insight. We will walk through a number of lessons that we have learned to stay agile and deliver quickly in a cloud big data environment with over 60 petabytes of data. We will also share our overall experience using Tableau to partner with our business users.
This session is also available at the following time(s): Thursday at 10:30 am
We will explore the authentication and authorization options for several Apache Hadoop distributions. Topics will include Kerberos, Knox, Sentry, Ranger integration, delegation for Impala, and other ways to provide secure access to your Hadoop platform.
Managing on-shelf availability (OSA) at the national level, not just for Oreos but for ALL Mondelez products, is a monumental undertaking in Excel. If that wasn't challenging enough, there are many root causes behind low OSA: Store Execution and Merchandising, Ordering and Distribution, Manufacturing, and accurate Demand Forecasting to name a few.
It's also difficult to understand the inventory processes of the different retailers, e.g. Kroger, Target, Safeway, etc. To manage the volume of data and complexities of OSA, Mondelez leveraged Redshift and Tableau to integrate internal and external data sources and pinpoint tactical steps for our field force to execute. Join us to learn how we overcame the Big Data hurdle, improved order quality through more accurate demand forecasting, and applied agile methodologies to create effective Tableau dashboards which resulted in recapturing millions in lost sales.
This session will tell the story of the first two years of Tableau at RueLaLa: what we did well, what we would have done differently, and how we’ve adapted to our growing needs.
- How we rolled out to 200 users with a very wide range of data literacy and limited resources
- Increasing interest through powerful visualizations (in a land of Excel sheets)
- Creating the right data at the right level of detail
- Naming standards and workplace organization
- Moving beyond data in the enterprise data warehouse & delivering now
- Connecting Tableau to Snowflake
- Finding the best tool for the job
- Creating dashboards using live queries and online sources
This session is also available at the following time(s): Tuesday at 12:15 pm
How to best leverage big data is something that all companies are trying to solve, and EMC is no exception. The innovative Tableau solutions that EMC business teams have implemented are allowing analysts, business users, and executives to leverage big data in a way that turns analytical dashboards into revenue-generating products. This session will present how a major EMC business organization is leveraging Tableau and injecting innovation to meet customer demands for data analytics through visualization.
As GoPro expands into content networks and launches new products, new sets of challenges appear. One of the most critical challenges facing GoPro during this period of rapid growth is their ability to make effective use of massive amounts of data.
In the past, it took GoPro months to understand new inbound data and determine how it needed to be transformed or augmented for analysis. To streamline this process, GoPro is creating an analysis loop that surfaces product usage trends and product insights. The loop serves a large ecosystem of GoPro executives, product managers, engineers, data scientists, and business analysts, and it utilizes an integrated technology pipeline consisting of Apache Kafka, Spark Streaming, and Cloudera’s distribution of Hadoop, with Tableau as the end-user analytics tool.
New display technology is allowing users to interact with data in exciting and immersive ways. Whereas the focus of many organizations is 'miniaturization' of dashboards to fit on mobile devices, this presentation focuses on the development of 'big' visualizations for 'big data.' Using touch and gesture tracking projector technology we provide examples of dashboards that extend up to 4000 pixels across 24 feet of projection. We will specifically review the technical details to develop dashboards in Tableau that fully utilize the capabilities of extended interactive projector technology.
With Amazon Web Services and Amazon Redshift, you can create a massively scalable, cloud-based data warehouse in just a few clicks. With the real-time responsiveness of Tableau, you can gain insights from that data just as easily. Tableau connects to Amazon Redshift natively for flexibility, scalability, and faster results. In this session, you’ll learn how to use Amazon Redshift and Tableau together in the AWS cloud to perform analysis on your data. We’ll also cover other services in the AWS ecosystem, like Amazon RDS, Amazon Aurora, and Amazon EMR, that you can leverage for all your data and analysis needs. Please bring your own laptop to follow along with the hands-on exercises.
Fast, modern, and agile are not terms most people would associate with large banks…until now. In this session, you’ll learn how Tableau was deployed globally at JPMorgan Chase to empower employees across all lines of business and all levels, all while providing the control and data governance expected of regulated financial institutions.
We’ll explore the journey of growing Tableau, see examples of how Tableau is being connected to a wide variety of data sources, and how we build engaging dashboards that communicate insights and tell compelling stories. Learn what contributed to our success, what didn’t work, and where we’re headed next.
The Wayfair Analytics team is hyper-focused on performance, as we serve 3,000 internal end users and provide them with the insights and visualizations they need to make Wayfair awesome. Some of our most business-critical vizzes hit against 16+ billion records of real-time clickstream data, and they load in less time than it takes you to sing "Everything's better, when it ships free."
In this session we will cover:
- Wayfair's strategy for designing visualizations for optimal performance and maximum intuitiveness
- How we enable fast on-demand customer segmentation against a 16+ billion record real-time dataset
- What's next on the big data frontier at Wayfair
At Lionsgate, Tableau has caused an explosion in data sharing and collaboration. Teams are now sharing new findings with other departments and blending data from different sources for a more complete, enterprise-wide understanding of our products.
How did we get here? Attendees of this session will be given our keys to BI success.
At the highest level, attendees will learn about:
- Bottom up vs the top down approach to BI
- How to successfully build a data driven culture
- How to implement big data and have an immediate ROI
- How to roll out an enterprise reporting solution
- How to use POCs and training to build a Tableau following with your business
- How to implement a data science club
- How to successfully implement social media dashboards
My brick oven is connected to the cloud...WHAT? This session is a light-hearted learning exercise with the serious intent of showing how easy it is to connect a sensor device to the Amazon IoT (Internet of Things) framework using Node.JS, Lambda and RDS/Redshift to ingest real-time sensor data and then visualize it in Tableau. You'll be able to apply these lessons to other platforms and use cases too.
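As a rough sketch of the ingestion side described above, the code below builds and serializes the kind of JSON payload a sensor might publish to an IoT topic, which a Lambda function could then parse and load into RDS/Redshift for Tableau. The device ID and field names are hypothetical, and the actual publish call (which would use the AWS IoT device SDK) is omitted; this only illustrates the payload format.

```python
import json
import random
import time

def make_reading(device_id, temp_c):
    """Build a sensor-reading payload of the sort an IoT device might
    publish; field names here are illustrative, not an AWS schema."""
    return {
        "device_id": device_id,
        "temp_c": round(temp_c, 1),
        "ts": int(time.time()),
    }

# Simulate a few oven-temperature readings and serialize them as the
# JSON messages we would publish to an IoT topic.
messages = [
    json.dumps(make_reading("brick-oven-1", 250 + random.uniform(-5, 5)))
    for _ in range(3)
]
for m in messages:
    print(m)
```

On the consuming side, a small function would reverse this step (parse the JSON, then insert a row per reading), which is what keeps the pipeline simple enough to swap in other platforms and use cases.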
Do you want to enable your business leaders to make data-driven workforce decisions quickly and effectively, without having to combine multiple sources in Excel? In this session, you will learn how VMware enriched their workforce data by combining external recruiting, equity, and financial planning data, leveraging Workday Big Data Analytics to create a consolidated, robust, high-quality data source for Tableau visualizations, enabling business leaders and HR business partners to make data-driven people decisions.
Note: Content is still being added to the schedule. Sessions, times, and locations are subject to change.