Category Archives: Technology

Google working on a new version of Android for cars

Google is laying the groundwork for a version of Android that would be built directly into cars, sources said, allowing drivers to enjoy all the benefits of the internet without even plugging in their smartphones.

The move is a major step up from Google’s current Android Auto software, which comes with the latest version of its smartphone operating system and requires a phone to be plugged into a compatible car with a built-in screen to access streaming music, maps and other apps.

Google, however, has never provided details or a timeframe for its long-term plan to put Android Auto directly into cars. The company now plans to do so when it rolls out the next version of its operating system, dubbed Android M, expected in a year or so, two people with knowledge of the matter said.

The sources declined to be identified because they were not authorized to discuss the plans publicly.

“It provides a much stronger foothold for Google to really be part of the vehicle rather than being an add-on,” said Thilo Koslowski, vice president and Automotive Practice Leader of industry research firm Gartner, who noted that he was unaware of Google’s latest plans in this area.

If successful, Android would become the standard system powering a car’s entertainment and navigation features, solidifying Google’s position in a new market where it is competing with arch-rival Apple Inc. Google could also potentially access the valuable trove of data collected by a vehicle.

Direct integration into cars ensures that drivers will use Google’s services every time they turn on the ignition, without having to plug in the phone. It could allow Google to make more use of a car’s camera, sensors, fuel gauge, and Internet connections that come with some newer car models.

Analysts said Google’s plan could face various technical and business challenges, including convincing automakers to integrate its services so tightly into their vehicles.

Google declined to comment.

Technology companies are racing to design appliances, wristwatches and other gadgets that connect to the Internet. Automobiles are a particularly attractive prospect because Americans spend nearly 50 minutes per day on average on their commute, according to US Census data.

Apple unveiled its CarPlay software in March and Google has signed on dozens of companies, including Hyundai, General Motors Co and Nissan Motor Co, for its Open Automotive Alliance and its Android Auto product.

Android Auto and CarPlay both currently “project” their smartphone apps onto the car’s screen. Many of the first compatible cars with this smartphone plug-in functionality are expected to be on display at the upcoming Consumer Electronics Show in Las Vegas next month and to go on sale in 2015.

By building Android into a car, Google’s services would not be at risk of switching off when a smartphone battery runs out of power, for example.

“With embedded it’s always on, always there,” said one of the sources, referring to the built-in version of Android Auto. “You don’t have to depend on your phone being there and on.”

Google’s software could potentially connect to other car components, allowing, for example, a built-in navigation system like Google Maps to detect when fuel is low and provide directions to the nearest gas stations.

By tapping into the car’s components, Google could also gain valuable information to feed its data-hungry advertising business model. “You can get access to GPS location, where you stop, where you travel everyday, your speed, your fuel level, where you stop for gas,” one of the sources said.

But the source noted that Android would need major improvements in performance and stability for carmakers to adopt it. In particular, Android Auto would need to power up instantly when the driver turns the car on, instead of having to wait more than 30 seconds, as happens with many smartphones.

Automakers might also be wary of giving Google access to in-car components that could raise safety and liability concerns, and be reluctant to give Google such a prime spot in their vehicles.

“Automakers want to keep their brand appeal and keep their differentiation,” said Mark Boyadjis, an analyst with industry research firm IHS Automotive. “Automakers don’t want to have a state of the industry where you get in any vehicle and it’s just the same experience wherever you go.”

Via: http://timesofindia.indiatimes.com/tech/tech-news/Google-working-on-a-new-version-of-Android-for-cars/articleshow/45570340.cms

 


Speed up your site with Chrome DevTools

Visual jitter ruins the experience of navigating a site. Addy Osmani reveals how to use Chrome DevTools to get your designs running at a steady 60fps.

Whether it’s on desktop or mobile, users want their web experience to be snappy, smooth and delightful. Even if the browser is busy rendering the page or loading in content, the user should still be able to scroll around and interact with it without any slow-down. No one likes seeing visual glitches.

Low or inconsistent frame rates affect not only user experience but user engagement: something that large sites like Flickr are increasingly starting to address. In this article, we will explore how to apply the lessons they have learned to your own sites.

Measurement is the most important part of any performance-profiling work. This article focuses on how to do this within Chrome DevTools. However, always test your sites and apps using the tools in other browsers to check if any issues are browser-specific.

Large websites are increasingly optimising their code to avoid users experiencing low or inconsistent frame rates while navigating

What is jank?

The human eye perceives a continuous stream of information. It does not naturally see motion as a series of frames. In the worlds of animation, film and gaming, using a series of still frames to simulate motion creates some interesting perceptual artifacts – especially if those frames are played back too slowly, or at an inconsistent rate. When the frame rate varies, movements can look jerky, and images can appear to jitter.

For an optimal user experience, animations must be silky, scrolling must be buttery-smooth, and your page must contain little or no ‘jank’ – visual disruption caused by variation in frame rate.

On the web, a low frame rate (or a janky experience) means that the human eye can make out individual frames. Giving users a jank-free experience often comes down to creating sites and applications that can run at a steady 60fps, similar to videogames.

At 60fps, you have 16.66ms for Chrome to complete every task necessary to display one frame of your webpage, including logic processing, painting, layout, image decoding and compositing – and that’s in an ideal world. Factor in miscellaneous browser processes, and the real figure is probably 8-10ms. Go over that limit, and the user will start to experience jank.
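The 16.66ms figure falls straight out of the refresh rate, and the tighter real-world budget is easy to sanity-check (the ~7ms overhead used below is an assumed figure, chosen to match the 8-10ms estimate above):

```javascript
// Frame budget at a given refresh rate: 1000ms divided by frames per second.
const refreshRate = 60;                 // Hz, typical for modern displays
const frameBudget = 1000 / refreshRate; // ms available to produce one frame
console.log(frameBudget.toFixed(2));    // "16.67"

// Setting aside roughly 7ms for the browser's own miscellaneous processes
// leaves the page itself with only around 10ms of real budget per frame.
const browserOverhead = 7;              // assumed, not a measured constant
const appBudget = frameBudget - browserOverhead;
console.log(Math.round(appBudget));     // 10
```

Miss that window even occasionally and frames get dropped, which is exactly the jitter the eye picks up on.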

The paint phase is the final step in drawing a web page

What’s magical about the number 60? Well, the frame rates of animations should match the refresh rates of the hardware they are displayed on – which, for most modern devices, is around 60Hz.

Phones usually refresh at a rate of 55-60Hz, laptops at 58-60Hz (although 50Hz in low power mode), while most monitors usually refresh at a rate of 50-62Hz.

What causes jank?

To hit 60fps, we need to look beyond JavaScript as the sole cause of performance bottlenecks, and spend more time investigating paint and layout issues. Some of the core causes of jank include:

  • Long paint times for DOM elements
  • Unnecessary image resizes (because you haven’t pre-scaled the image to the size that you require)
  • Long image-decoding times
  • Unexpected layer invalidations
  • Garbage collector runs
  • Network requests (for example, processing an XHR)
  • Heavy animation or data processing.
  • Input handlers with heavy JavaScript (one common mistake is adding a lot of JavaScript to rearrange the page in an onscroll handler)
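That last cause is common enough to be worth sketching a fix for. Rather than doing heavy work directly inside an onscroll handler, batch it so it runs at most once per frame via requestAnimationFrame. In this minimal sketch, a mock rAF queue stands in for the browser so the coalescing logic can run on its own:

```javascript
// Coalesce heavy onscroll work into one requestAnimationFrame callback
// per frame. The mock queue below stands in for the browser's rAF here,
// purely so the pattern is demonstrable outside a page.
const rafQueue = [];
const requestAnimationFrame = cb => rafQueue.push(cb);

let ticking = false;
let heavyRuns = 0;

function onScroll() {
  if (ticking) return;            // work is already scheduled for this frame
  ticking = true;
  requestAnimationFrame(() => {
    heavyRuns++;                  // the expensive layout work would go here
    ticking = false;
  });
}

// 100 scroll events arrive before the next frame tick...
for (let i = 0; i < 100; i++) onScroll();
// ...but the frame tick runs the heavy work only once.
rafQueue.splice(0).forEach(cb => cb());
console.log(heavyRuns); // 1
```

In a real page you would attach onScroll with window.addEventListener('scroll', onScroll) and let the browser's own requestAnimationFrame drive the callback.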

Diagnosing slow paint times

Let’s quickly run through what the paint process involves. In the life of a web page the browser generally performs three core tasks: fetching resources, parsing and tokenizing these resources (the HTML/CSS/JS code), and finally drawing things to screen.

During the final task, the browser traverses the render tree – a tree of the visual elements making up the web page – and calls a paint method to display content to the screen. Painting can either be global (against the whole tree) or incremental (partial). The diagram below shows the order in which tasks are completed. It is taken from Tali Garsiel’s How Browsers Work.

The tasks a browser performs when processing a web page

Why should you care about this? Well, it’s important to be aware that the browser has to do a lot of work in order to draw things to the screen. Anything you do to increase the complexity of that task (for example, forcing the browser to recalculate the layout of the page) has the potential to introduce jank. You want to avoid this. So let’s talk about tools that you can use to identify potential bottlenecks.

Introducing the Chrome DevTools Timeline

Chrome DevTools’ Timeline panel provides an overview of all the activity in an application as it runs: for example, processing DOM events, rendering page layouts or painting elements to the screen. It can break this information down in three different ways: by Events, Frames or Memory usage.

For this article, we’re interested in Frames mode, which shows the tasks Chrome had to perform to generate a single frame – that is, a single update to the way the application is presented onscreen.

The Timeline won’t display any data by default, so to begin a recording session, you need to open your app and click on the grey circle at the bottom of the pane (or just use the Cmd/Ctrl+E shortcut). The record button will now turn red, and the Timeline will begin to capture information. If you don’t have a site or app of your own to hand, try http://inception-explained.com, a site that currently exhibits jank.

Complete a few actions inside your app (for example, scrolling) and after a few seconds, click the button again to stop recording.

The summary view at the bottom of the screen displays horizontal bars representing the time taken by network operations and HTML parsing (blue), JavaScript (yellow), style recalculation and layout (purple) and painting and compositing (green) events for your page. The Records column shows a brief description of each one. Paint events are invoked in response to user inputs that require a visual change to be made to the page, such as resizing a window or scrolling. Recalculate events occur due to modifications of CSS properties; layout events (or reflows) are due to changes in element position.

The Timeline recording session

Hovering over a record will display an extended tooltip with details about the time taken to complete it. Pay attention to these, since they contain a lot of useful information, especially the Call Stack. The Timeline identifies when your app causes a forced asynchronous layout and marks these records with a yellow warning icon.

Diagnosing long paint times

Last year, Google shared its advice for diagnosing the causes of long paint times. To uncover which styles are slow, Google advised developers to do the following:

  • Navigate to a page and open up the Chrome DevTools.
  • Take a Timeline recording, noting down the paint times.
  • Inspect individual elements, starting with the larger ones more likely to cause significant slow-downs.
  • Either: disable the styles for those elements one at a time, removing an individual CSS style (or a single style modification, if the style is being set via JavaScript). Repeat this process, checking whether paint times have gone down; if they have, the last style removed is the culprit, and the others can be added back in.
  • Or: use different styles to try to recreate the overall look of the page in a way that reduces total calculation time.

The process for establishing which elements are slow is similar, only rather than disabling styles, it means setting those parts of the DOM to display:none. This works fairly well – but thankfully, Chrome DevTools now contains some newer features we can use to help to troubleshoot paints and repaints. Before we look at them, let’s review what we mean by a ‘repaint’.

What is a repaint?

Each time a user interacts with a page, only parts of it will be changed: for example, they may perform an action that requires the browser to change the visibility of an element, or add an outline to it. Chrome keeps an eye on which parts of the screen need to be changed, creating a ‘damage rectangle’ around the affected area.

Before making the changes, it saves the rectangle as a bitmap, then only paints the delta between the old rectangle and the new one.

Hovering over a record displays a tooltip with more details

The process of updating the page is known as a repaint. In performance terms, a repaint is an expensive operation, and one that, ideally, you want to avoid. If you notice that there are particular areas of a page that require a lot of repainting, it’s useful to investigate what can be done to reduce this.

Diagnosing long paint times: the new way

Google recently added a couple of new features to Chrome DevTools to make it easier to diagnose the causes of long paint times. These are available in Chrome Canary.

First, a new helper enables you to toggle the visibility:hidden setting on an element. When this style is applied to an element, the browser doesn’t paint that element, but otherwise preserves the layout of the page unchanged. To use the shortcut, select a DOM element in the Elements panel and press H.

Second, the Enable continuous page repainting option in the Settings panel helps identify elements that have a high paint cost. It forces Chrome to repaint the page continuously, providing a counter that shows just how long this is taking. To diagnose what is causing the slowdown, keep your eye on this counter, and use H to toggle individual styles on and off.

Light bars in the Timeline indicate that the CPU was busy

Let’s look at what a workflow for diagnosing paint issues using these new tools might look like:

  1. Open up your page, launch Chrome DevTools and switch to the Timeline panel. Hit record and interact with your page the same way your user would.
  2. Check the Timeline for any frames that went over budget: that is, that took longer than 16.6ms to calculate. If you’re close to this figure, you’re probably way over budget for mobile devices. Aim to complete all of your work within 10ms to have some margin for error. (If you’re building for mobile – which you should be – you should run this analysis using remote debugging.)
  3. Once you’ve spotted a janky frame, check what caused it. Was it a huge paint operation? A CSS layout issue? Or JavaScript?
  4. If it was a paint or layout issue:
    a) Go to Settings and check Enable continuous page repainting.
    b) Walk through the DOM tree, hiding nonessential elements using the H shortcut. Identify which elements make a big difference to paint times.
    c) Once you know there is something about an element that’s slowing the painting down, uncheck styles that could have an impact on paint time (such as box-shadow) and look at frame rate again.
    d) Continue until you’ve located the style responsible for the slow-down.
  5. Rinse and repeat.

Especially on sites that rely heavily on scrolling, you might discover that your main content relies on overflow:scroll. This is a real challenge: this kind of scrolling isn’t GPU-accelerated, so the content is repainted whenever the user scrolls. You can work around such issues by using normal page scroll (overflow:visible) together with position:fixed.

Use console.time() and console.timeEnd() to mark ranges in recordings

Other useful tools

Chrome DevTools also has several other features that can help you to troubleshoot your web apps.

The Rendering section of the Settings panel now includes an option marked Show paint rectangles. Enabling it highlights the part of the screen being repainted in each frame. This provides a simple visual workflow for minimising slow-down: you want to keep the areas being repainted as small as possible.

An older, but equally useful, tool for visualising jank is the real-time FPS meter. Again, you can find this in the Rendering section of the Settings panel: look for the Show FPS meter checkbox. When activated, you will see a dark box in the top-right corner of your page with frame statistics. This can be used during live editing to diagnose what is causing frame rate to drop off without having to switch in and out of the Timeline view.

However, keep in mind that it is easy to miss frames with intermittent jank when using only the FPS meter. You should also note that FPS on desktop differs from that on devices, so be sure to profile performance there too.

The Timeline’s records view lists everything that happened during a recording session

Pro tips for troubleshooting

To round off the article, let’s run through a few tips to make troubleshooting pages quicker as well as easier.

  1. Your JavaScript can annotate DevTools Timeline recordings using console.timeStamp().
  2. Your code can also use console.time() and console.timeEnd() to mark ranges in DevTools Timeline recordings.
  3. If you check Show CPU activity on the ruler in the Timeline section of the Settings panel, you can overlay the CPU activity in your Timeline recordings. Light bars indicate the CPU was busy. If you hover over a CPU bar, this highlights the region during which the CPU was active.
  4. You can drill down to records of a particular type in the Timeline using the Cmd/Ctrl+F shortcut. Just enter the name of a particular record type (for example, scroll) in the search field, and the Timeline will only display the records containing that term.
  5. Transparent bars in the timeline mean one of two things: either your JavaScript on the main thread was busy doing something that DevTools can’t display, or you were bottlenecked by your GPU.

Use Show paint rectangles to see the part of a frame being repainted
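Tips 1 and 2 are easy to put into practice. Wrapping a suspect operation in console.time() and console.timeEnd() with a matching label makes it show up as a named range in the Timeline recording. A quick sketch (the "build-list" label and the string-building work are illustrative, not from any particular app):

```javascript
// Everything between time() and timeEnd() with the same label is grouped
// as one named range in the DevTools Timeline recording.
console.time("build-list");

var items = [];
for (var i = 0; i < 10000; i++) {
  items.push("<li>Item " + i + "</li>");
}
var html = items.join("");

console.timeEnd("build-list"); // logs something like "build-list: 4ms"
```

console.timeStamp() works similarly but drops a single marker rather than a range, which is handy for annotating one-off events such as the start of an animation.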

In conclusion

Sometimes it’s the small, seemingly insignificant things that can be the biggest performance bottlenecks in your application. Watch your CSS and keep in mind that poor paint times can also result from sub-optimal JavaScript: for example, onscroll handlers firing unnecessarily.

While I don’t suggest you purely focus on paint or layout, it is useful to be aware of the cost of using certain styles in the wrong way.

To learn more about optimising the paint performance of your pages, check out http://jankfree.org and the official Chrome DevTools documentation.

Sony ‘SmartWig’ patent reveals GPS and brainwave monitoring capabilities

Are you both bald AND lost? Then the new “SmartWig” from Japan might be just what you need.

The techno-toupe, which can read the wearer’s brainwaves and direct them to their destination with onboard GPS, is the latest and possibly the wackiest addition to the world of wearable computing.

The country that brought us world-changing hits like the Walkman and the pocket calculator, as well as instantly forgettable misses like the walking toaster, now offers a hi-tech hairpiece.

The proof-of-concept invention comes in three varieties, each specially designed to make life that little bit easier for the follically challenged.

Wearers of the Presentation Wig will be able to remotely control a laser pointer from their mop-top. They can move forward through a PowerPoint slideshow by tugging the right sideburn and go back a page by pulling on the left.

The Navigation Wig uses GPS to speak to satellites and guide users to their destination with tiny vibrations on different parts of the head.

Meanwhile, the Sensing Wig monitors body temperature, blood pressure and brainwaves and can also record sounds and images to allow wearers to playback their day and see what set their systems aflutter.

“There is a wide variety of wearable computing devices, such as computational glasses, clothes, shoes, and so on. However, most wearable devices have become neither common nor popular,” the developers said in an essay issued last year.

“We think one of the biggest reasons is the style: the focus has been function, not style,” said Hiroaki Tobita and Takuya Kuzi.

“The goal of SmartWig is to achieve both natural and practical wearable devices,” they said, adding that the “natural appearance” of their invention, which can be made from human hair, could prove a selling point.

A spokeswoman for Sony said Thursday that patents for the SmartWig had been filed in the European Union and the United States, although there were currently no plans to commercialise the product.

Despite its phenomenal success with the much-aped Walkman, Sony has struggled in recent years in its mainstay electronics business, and has been without a significant global hit.

Sony’s chief executive officer Kazuo Hirai told local media last month he is pouring business resources into the development of wearable devices, which also includes the company’s second-generation smart watch.

Sony’s South Korean rival Samsung Electronics has a similar device while consumer favourite Apple is reportedly developing its own “iWatch”.

Sony SmartWig (Image credit: AFP Photo/Hiroaki Tobita/Sony CSL)

http://gadgets.ndtv.com/others/news/sony-smartwig-patent-reveals-gps-and-brainwave-monitoring-capabilities-452023?pfrom=gadgetsfeatured

New Camera API for Android to add RAW support, face-detection and more

A new report indicates that Google is working on a new camera API, which will enhance the camera experience on an Android smartphone.

In a report, Ars Technica has published some of the changes expected in the new API, including support for RAW image output. As per the report, RAW images are only modestly compressed and processed compared to JPEG, the default format for captured images on Android smartphones. RAW output would increase the amount of correction possible, and programs like Photoshop can do much more with a RAW file than with a JPEG. It’s worth pointing out that Nokia has already introduced RAW image output support in its flagship Windows Phone 8 phablet, the Lumia 1520.

Further, the report points to a month-old batch of code showing that the new camera API was in the works. The code, first spotted by app developer Josh Brown, reads: “DO NOT MERGE: Hide new camera API. Not yet ready.”

In addition, the alleged new camera API is rumoured to bring a face-detection feature that would include bounding boxes around faces and centre coordinates; Android’s OEM partners such as Samsung, Sony and HTC have already introduced face detection in their top-end smartphones. Other expected additions are a revamped burst mode and a major overhaul of image quality: the report cites documentation with phrases like “substantially improved capabilities” and “fine-grain control”, suggesting that Google is working closely on image details.

The leaked APIs also suggested that Google might bring removable camera support, much like Sony’s Cyber-shot DSC-QX100 and DSC-QX10 lens cameras, to Android smartphones. The report notes the API for removable camera, saying: “The camera device is removable and has been disconnected from the Android device, or the camera service has shut down the connection due to a higher-priority access request for the camera device.” The report does not reveal any details about the release of the new API for Android.

Motorola Moto G dual-SIM model confirmed for India launch

While announcing the Motorola Moto G at an event in Brazil, the Google-owned handset maker confirmed its intentions to get back into the Indian smartphone market next year, in early January.

A tweet by Guy Kawasaki later confirmed that the Motorola Moto G dual-SIM variant will be making it to the Indian market, as well as Brazil. The tweet said, “#MotoG Dual SIM will be available in India and Brasil.” Kawasaki confirmed the news via a question and answer session with Motorola.

While Motorola has confirmed the Moto G will be arriving in India, it has not revealed precise plans for the rollout. We expect it to be sold in India via Motorola’s official online channels, much like the Nexus devices on the Google Play store. There has been no word on Moto G India pricing, but considering US pricing of $179 for the 8GB model and $199 for the 16GB model, it can be expected to be priced around Rs. 12,000 to Rs. 20,000 in India, including various taxes.

Motorola Moto G rear panel

The Motorola Moto G runs Android 4.3 out-of-the-box and the Google-owned handset maker has also confirmed that the smartphone will be getting Android 4.4 KitKat by January 2014.

The Moto G features a 4.5-inch HD ‘edge-to-edge’ display with a resolution of 720×1280 pixels (translating to a pixel density of 329ppi) and boasts a Corning Gorilla Glass 3 screen. Powered by a quad-core 1.2GHz Qualcomm Snapdragon 400 (Cortex-A7) processor coupled with an Adreno 305 GPU, the Moto G features 1GB of RAM. On the optics front, the Moto G sports a 5-megapixel rear camera along with an LED flash and also includes a 1.3-megapixel front-facing camera. The rear camera supports HD (720p) video recording.

The Motorola Moto G also features water-resistant nano-coating on the inside and outside. The Moto G packs a 2070mAh battery, which Motorola claims can deliver up to 30 percent more talktime than the Apple iPhone 5s.

Moto G – Google and Motorola’s new budget smartphone

How to create a volume light effect

To generate volume light, you must use a direct light source. 3ds Max standard directional lights work well – but you can also use V-Ray plane lights by increasing the directional parameter.

Start by adding a target directional light into your scene and position the light source and the target so that the light passes through the opening or window. The target must go beyond the floor or wall so that the volume light continues throughout. Avoid angling the direct light towards the camera, otherwise you may end up with a washed-out render due to the volume light covering the camera.

The volume light will be contained within the direct light’s hotspot beam and falloff field. If you set the falloff field to be much greater than the hotspot beam, the volume light will start to lose density quite rapidly and fade out the further it travels from the centre of the light. If you want an even distribution of light, it is best to keep the falloff field value close to the hotspot beam value.

Start by adding a target directional light into your scene

By default, 3ds Max standard lights do not have any attenuation applied, so the light has continuous luminosity. This is incorrect. Light should start to lose luminosity by dispersing the further it travels away from the source. Within the decay parameters, set the type to Inverse Square. If the light decays too fast, you can tweak this by adjusting the Start Parameter.

3ds Max standard light multipliers do not behave in the same way that V-Ray lights do. When using Inverse Square falloff, the multiplier must be set to a very high value in order to appear within the scene. A good value to start from is 800, as this roughly equals a standard V-Ray light. The multiplier is also affected by the start decay parameter: the lower the decay, the lower the multiplier needs to be. You may end up setting the light multiplier up in the thousands to get the correct illumination in accordance with the decay.
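Those large multiplier values follow directly from the inverse-square law: perceived intensity falls off with the square of the distance from the source. A quick sketch of the arithmetic (the helper function and the numbers are illustrative, not 3ds Max’s actual internals):

```javascript
// Inverse-square decay: intensity = multiplier / distance^2, so a light
// needs a multiplier in the hundreds or thousands to reach across a scene.
function intensityAt(multiplier, distance) {
  return multiplier / (distance * distance);
}

console.log(intensityAt(800, 1));  // 800 right next to the source
console.log(intensityAt(800, 20)); // 2 at 20 units away
```

A light that looks correct one unit from the source delivers only a fraction of that energy across a room, which is why the multiplier and the start decay parameter have to be balanced against each other.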

Under shadow parameters, turn on atmosphere shadows and area shadows. This softens them as the shadow moves further away from the casting object. Increasing the subdivisions here will also improve the shadow quality and reduce noise.

3ds Max standard light multipliers do not behave in the same way that V-Ray lights do

Go to Environment Effects and add a V-Ray Environment Fog to atmosphere effects. Under V-Ray Environment Fog nodes, add the Direct Light. Turn off Use All Lights so the volume light effect is only applied to the lights you choose.

In the general parameters, you can either set the fog colour here or within the Directional Light. You cannot mix the colours, so one must remain white to be inactive. The Fog Distance controls the length the volume light will travel along the direct light, so set this distance to be the light’s entire length.

The fog height also affects the visibility. Therefore this setting must cover the entire height of the light. If the light is positioned 9,000mm above the floor, then this must be your minimum value. A good way to determine the value is to draw a rectangle that covers the height and length of the scene.

In order to get the correct volume light effect, exclude objects so that only the volume light is visible

V-Ray Environment Fog is an atmospheric effect that is calculated during rendering using a brute-force method. Therefore it is important to optimise the settings so that the render times are not too high. The subdivisions parameter controls the noise level. Lower values produce more noise, whereas higher values produce less at the cost of longer render time. Start with a value of 16 and increase in increments of 8 until you are satisfied with the results. Usually 50 subdivisions are adequate, but you may need to go up to 100 depending on the scene.

If the Scatter GI parameter is enabled, the volume light will scatter throughout your scene via global illumination, illuminating surrounding objects. This adds further realism beyond the direct light alone, but it can render very slowly. You may find that beyond a certain value the results look the same: try setting this to 8 and then 16, and if you see no difference, a value of 8 is adequate.

http://www.creativebloq.com/3ds-max/how-create-volume-light-effect-10135111

iPad Air unveiled

Apple on Tuesday unveiled a slimmer version of its top-selling full-size tablet computer, dubbed the “iPad Air,” along with a revamped iPad Mini with an improved high-definition display.

The new iPad Air is 43 percent thinner than the version it replaces, weighs just one pound (450 grams), and is “screaming fast,” Apple vice president Phil Schiller said at an unveiling.

Windows 8.1 update glitch stops RT starting up

The latest Windows update is causing problems for owners of Microsoft’s Surface RT gadgets.

The Windows 8.1 update has reportedly meant some of the touchscreen devices will not start up properly.

Microsoft has removed the update from its website while it looks into what has caused the problems.

At the same time, many people are reporting that the 8.1 update for Internet Explorer does not work well with Outlook and some Google services.


Adobe in source code and customer data security breach

Adobe has confirmed that 2.9 million customers have had private information stolen during a “sophisticated” cyber attack on its website.

The attackers accessed encrypted customer passwords and payment card numbers, the company said.

But it does not believe decrypted debit or credit card data was removed.

Adobe also revealed that it was investigating the “illegal access” of source code for numerous products, including Adobe Acrobat and ColdFusion.

“We deeply regret that this incident occurred,” said Brad Arkin, Adobe’s chief security officer.


Create a JavaScript bar chart with D3

Scott Murray, author of Interactive Data Visualization for the Web, demonstrates how to visualise data using the browser-based tool D3.js.

D3 (Data-Driven Documents), also known as D3.js, is a powerful tool for creating data visualisation within the browser. It’s a JavaScript library that leverages web standards that you already know to create future-proofed interactive visualisations that don’t rely on browser plug-ins. Start by downloading the code and opening up 00_page_template.html in a current browser. (You may need to view pages served through a web server, for example http://localhost/.)

Selection and creation

The page template provides only a reference to D3. Notice that the ‘body’ of the page is empty.

Open up the JavaScript console, and type in your first line:

d3.select("body")

Everything in D3 begins with a selection. You select something first; then you can tell D3 what you want to do with it.

D3’s select() and selectAll() methods both use CSS selector syntax (just like jQuery), so you can quickly identify any DOM element or elements you like. select() selects only the first element found, while selectAll() returns all matching elements.

We’ve selected the body; now let’s add something to it:

d3.select("body").append("p")

append() creates a new element inside the end of whatever selection you give it. So here, we create a new p paragraph at the end of the body.

Let’s throw some text into that paragraph:

d3.select("body").append("p").text("Hello, world!")

Now you should see “Hello, world!” rendered in the browser.

Well, hello!

All your data are belong to arrays

Switch to 01_data.html for a refresher on storing data in JavaScript. The simplest storage is one value in a single variable:

var value = 5;

Arrays can store multiple values. In JavaScript, arrays are defined with hard brackets:

var arrayOfValues = [ 5, 6, 7, 8, 10, 12, 22 ];

D3 is supremely flexible about data — as long as it’s in an array. Within an array, mix and match as you please. Instead of single values (as above), you could use objects.

Objects store arbitrary key/value pairs, and are defined with curly brackets. Here’s an array of objects:

var arrayOfObjects = [
{ plant: "fern", color: "green", number: 23 },
{ plant: "rose", color: "pink", number: 7 },
{ plant: "dandelion", color: "white", number: 185 }
];
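Values in an array of objects are read back with ordinary index and dot notation. A quick sketch you can paste straight into the console:

```javascript
// The same array of objects as above
var arrayOfObjects = [
{ plant: "fern", color: "green", number: 23 },
{ plant: "rose", color: "pink", number: 7 },
{ plant: "dandelion", color: "white", number: 185 }
];

// Pick an object by index, then a value by key
console.log(arrayOfObjects[0].plant);   // "fern"
console.log(arrayOfObjects[2].number);  // 185
```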

Binding data to elements

I’ll use a straightforward array as our data set:

var dataset = [90, 45, 29, 88, 72, 63, 51, 35, 26, 20];

And I’ll set up some variables for our chart’s dimensions:

var width = 500;
var height = 200;
var barHeight = 20;


The Scalable Vector Graphics image format is amazing because its code is markup, just like HTML.

This simple SVG image contains a square and a circle.

<svg width="100" height="100">
<rect x="0" y="0" width="50" height="50"></rect>
<circle cx="50" cy="50" r="25"></circle>
</svg>

Styling SVG

Because SVG markup is HTML-compatible, all SVG elements will exist in the DOM.

As a result, they can be styled with CSS and manipulated dynamically with JavaScript.

I dare you to try that with JPGs!

Before we can draw anything, we have to create the SVG element inside which all the visual elements will reside:

var svg = d3.select("body").append("svg")
.attr("width", width)
.attr("height", height);

This selects the body and appends a new svg element. Then we use D3’s attr() method to set width and height attributes.

The selection of the new SVG element is passed back into a new variable called svg.

Storing selections this way allows us to reference elements later without having to re-select them.

Finally, brace yourself for D3’s most mind-bending pattern:

svg.selectAll("rect")
.data(dataset)
.enter().append("rect")
.attr("x", 0)
.attr("y", 0)
.attr("width", width)
.attr("height", barHeight);

Ack, what is this? First, within the SVG element, we select all the rect elements. Of course, there are none yet, but we’re about to create them!

Next, we call data(), which binds our data set to the selection. This is the fundamental process of D3: driving documents with data by linking one data value to one element.

For multiple values, we need multiple elements — say, one circle for each number. In this case, since there are more data values than matching DOM elements, data() not only binds the data, but creates an enter selection which represents all the incoming elements that do not yet exist.

Positioning

We use enter() to access the enter selection. Then append() fills each empty placeholder with a new rect: these are the elements to which the data values are linked. Finally, several attr() statements set the properties of each new rect.

Open up 02_binding_data.html and inspect the DOM. We see there are 10 rectangles, but they’re all positioned on top of each other.


This is happening because we set the same x, y, width, and height values for each rect.

To prove that the data is now bound to elements, type d3.selectAll("rect") into the console. You’ll see an array of 10 SVG rect elements. Expand each one, and you’ll find a __data__ property, in which lives a data value!
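Conceptually, the binding step pairs each value with one element. A plain-JavaScript sketch of the idea (not D3’s actual internals) looks like this:

```javascript
var dataset = [90, 45, 29, 88, 72, 63, 51, 35, 26, 20];

// Imitate what data() achieves: each (imaginary) element
// carries its bound value in a __data__ property
var boundElements = dataset.map(function (d) {
return { tagName: "rect", __data__: d };
});

console.log(boundElements.length);       // 10
console.log(boundElements[0].__data__);  // 90
```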

The elements’ values are 90, 45, 29, and so on, just as specified in our original dataset array.

Setting attributes

Let’s rewrite the last four lines of the code above as:

.attr("x", 0)
.attr("y", function(d, i) {
return i * barHeight;
})
.attr("width", function(d) {
return d;
})
.attr("height", barHeight - 1);

All bars will be aligned along the left edge, so we can keep x at zero. But the y values must be spaced out to prevent overlap. To calculate dynamic values, we can specify an anonymous function instead of a static value.

Notice this function takes d and i as parameters, into which D3 will pass the current datum (the current value in the array) and its index position (0, 1, 2…). Although we don’t reference d yet, we must include it as a parameter so i is given the right value.


Within this function, i * barHeight is calculated and returned as the y value, thereby pushing each successive rect further down the image.
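To see the arithmetic, with barHeight set to 20 the first few bars land at these y values:

```javascript
var barHeight = 20;

// Same calculation as the anonymous y function above, for i = 0..3
var yValues = [0, 1, 2, 3].map(function (d, i) {
return i * barHeight;
});

console.log(yValues);  // [0, 20, 40, 60]
```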

For the width, we take d, the raw data value. And for height, we don’t need a function to calculate a dynamic value, since all bars will be the same height (barHeight - 1).

Now check out 03_setting_attributes.html!

Each bar gets a unique vertical position and a width that corresponds to the original array’s data value: a true visualisation!

Scaling data to pixels

This looks better, but the bars are too short. Our SVG is 500 pixels wide, yet the bars cover only a quarter of that.

Say hello to D3’s scales. Scales are customisable functions that map values from an input domain to an output range. We’ll use a scale to map values from the data’s domain (0 to 90) to pixel values (0 to 500).

To define our first scale:

var xScale = d3.scale.linear()
.domain([0, d3.max(dataset)])
.range([0, width]);

domain() takes an array with two values. The first value is the low end of the domain, while the second is the high value. d3.max() is a quick way to get the largest value from an array (90, in this case).

range() also takes an array of two values, one low and one high. For us, these are pixel units, so we set them to zero and width.
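Under the hood, a linear scale is just linear interpolation. A plain-JavaScript sketch of what xScale computes, assuming a domain of [0, 90] and a range of [0, 500]:

```javascript
// A minimal linear scale: maps domain [d0, d1] onto range [r0, r1]
function makeLinearScale(domain, range) {
return function (value) {
var t = (value - domain[0]) / (domain[1] - domain[0]);
return range[0] + t * (range[1] - range[0]);
};
}

var xScale = makeLinearScale([0, 90], [0, 500]);

console.log(xScale(90));  // 500 (the largest value fills the full width)
console.log(xScale(45));  // 250
console.log(xScale(0));   // 0
```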

One last thing: when setting each bar’s width, now we need to wrap d in our new scale function:

.attr("width", function(d) {
return xScale(d);
})

Brilliant! The data values have been mapped to visual pixel values, and therefore our bars now automatically scale to fit the image width.


One step further

Taking this idea one step further, we can use an ordinal scale for the vertical axis.

This keeps our bars’ spacing and height flexible, so they can scale should our data set change in the future. Ordinal scales expect categories, not linear quantitative values, as input.

In this case, the ‘categories’ will be simply the position of each value in the data set: 0, 1, 2, and so on.

var yScale = d3.scale.ordinal()
.domain(d3.range(dataset.length))
.rangeRoundBands([0, height], 0.05);

d3.range() is a handy sequential integer generator, so the input domain here is set to [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], since dataset.length is 10.
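The single-argument form of d3.range() behaves much like a simple counting loop; a plain-JavaScript equivalent:

```javascript
// Roughly what d3.range(n) returns: integers from 0 up to, but not including, n
function range(n) {
var result = [];
for (var i = 0; i < n; i++) {
result.push(i);
}
return result;
}

console.log(range(10));  // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```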

rangeRoundBands() sets a banded output range that lines up nicely with pixel values — just what we need to space out the bars evenly, between 0 and height. The 0.05 tells D3 to add 5% for padding between bands.

D3 can handle more than bar charts, including zoomable geographic data, sortable tables and other interactive elements.

Later, when setting the bars’ y and height values, instead of calculating those values from barHeight, we now reference our new ordinal scale:

.attr("y", function(d, i) {
return yScale(i);
})

.attr("height", yScale.rangeBand());

To specify basic interactions, bind the event listeners to elements using on():

.on("click", function() {
//Do something when this element is clicked
})

on() takes two arguments: first, the name of the DOM event that should trigger the function. This can be any standard JavaScript event. Let’s use mouseover and mouseout to highlight each bar on mouse hover:

.on("mouseover", function() {
d3.select(this).classed("highlight", true);
})
.on("mouseout", function() {
d3.select(this).classed("highlight", false);
});


Within each anonymous function, this represents “the current element,” so we can select it with d3.select(this). classed() adds or removes a class from any element. If true, the class will be added. If false, it is removed. Finally, we need a CSS style to recolour the bars when the highlight class is applied:

rect.highlight {
fill: purple;
}

(A colour change on hover like this could be achieved with CSS alone, but event listeners are needed for more complex interactions, like transitions.) Open up 05_interactivity.html: you’ve made a simple interactive bar chart!

Words: Scott Murray

Interactive Data Visualization for the Web: An Introduction to Designing with D3 by Scott Murray is available to buy from O’Reilly.

This article originally appeared in .net magazine issue 237. Thanks to Mike Bostock for his peer review of this tutorial
