Google is laying the groundwork for a version of Android that would be built directly into cars, sources said, allowing drivers to enjoy all the benefits of the internet without even plugging in their smartphones.
The move is a major step up from Google’s current Android Auto software, which comes with the latest version of its smartphone operating system and requires a phone to be plugged into a compatible car with a built-in screen to access streaming music, maps and other apps.
Google, however, has never provided details or a timeframe for its long-term plan to put Android Auto directly into cars. The company now plans to do so when it rolls out the next version of its operating system, dubbed Android M, expected in a year or so, two people with knowledge of the matter said.
The sources declined to be identified because they were not authorized to discuss the plans publicly.
“It provides a much stronger foothold for Google to really be part of the vehicle rather than being an add-on,” said Thilo Koslowski, vice president and Automotive Practice Leader of industry research firm Gartner, who noted that he was unaware of Google’s latest plans in this area.
If successful, Android would become the standard system powering a car’s entertainment and navigation features, solidifying Google’s position in a new market where it is competing with arch-rival Apple Inc. Google could also potentially access the valuable trove of data collected by a vehicle.
Direct integration into cars ensures that drivers will use Google’s services every time they turn on the ignition, without having to plug in the phone. It could allow Google to make more use of a car’s camera, sensors, fuel gauge, and Internet connections that come with some newer car models.
Analysts said Google’s plan could face various technical and business challenges, including convincing automakers to integrate its services so tightly into their vehicles.
Google declined to comment.
Technology companies are racing to design appliances, wristwatches and other gadgets that connect to the Internet. Automobiles are a particularly attractive prospect because Americans spend nearly 50 minutes per day on average on their commute, according to US Census data.
Apple unveiled its CarPlay software in March and Google has signed on dozens of companies, including Hyundai, General Motors Co and Nissan Motor Co, for its Open Automotive Alliance and its Android Auto product.
Android Auto and CarPlay both currently “project” their smartphone apps onto the car’s screen. Many of the first compatible cars with this smartphone plug-in functionality are expected to be on display at the upcoming Consumer Electronics Show in Las Vegas next month and to go on sale in 2015.
By building Android into a car, Google’s services would not be at risk of switching off when a smartphone battery runs out of power, for example.
“With embedded it’s always on, always there,” said one of the sources, referring to the built-in version of Android Auto. “You don’t have to depend on your phone being there and on.”
Google’s software could potentially connect to other car components, allowing, for example, a built-in navigation system like Google Maps to detect when fuel is low and provide directions to the nearest gas stations.
By tapping into the car’s components, Google could also gain valuable information to feed its data-hungry advertising business model. “You can get access to GPS location, where you stop, where you travel everyday, your speed, your fuel level, where you stop for gas,” one of the sources said.
But the source noted that Android would need major improvements in performance and stability for carmakers to adopt it. In particular, Android Auto would need to power-up instantly when the driver turns the car on, instead of having to wait more than 30 seconds, as happens with many smartphones.
Automakers might also be wary of giving Google access to in-car components that could raise safety and liability concerns, and be reluctant to give Google such a prime spot in their vehicles.
“Automakers want to keep their brand appeal and keep their differentiation,” said Mark Boyadjis, an analyst with industry research firm IHS Automotive. “Automakers don’t want to have a state of the industry where you get in any vehicle and it’s just the same experience wherever you go.”
NEW YORK: Facebook Inc will buy fast-growing mobile-messaging startup WhatsApp for $19 billion in cash and stock, as the world’s largest social network looks for ways to boost its popularity, especially among a younger crowd.
The acquisition of the hot messaging service with more than 450 million users around the world stunned many Silicon Valley observers with its lofty price tag.
But it underscores Facebook’s determination to win the market for messaging, an indispensable utility in a mobile era.
Combining text messaging and social networking, messaging apps provide a quick way for smartphone users to trade everything from brief texts to flirtatious pictures to YouTube clips – bypassing the need to pay wireless carriers for messaging services.
And it helps Facebook tap teens who will eschew the mainstream social networks and prefer WhatsApp and rivals such as Line and WeChat, which have exploded in size as mobile messaging takes off.
“People are calling them ‘Facebook Nevers’,” said Jeremy Liew, a partner at Lightspeed and an early investor in Snapchat.
WhatsApp is adding about a million users per day, Facebook co-founder and chief executive officer Mark Zuckerberg said on his page on Wednesday.
“WhatsApp will complement our existing chat and messaging services to provide new tools for our community,” he wrote on his Facebook page. “Since WhatsApp and (Facebook) Messenger serve such different and important users, we will continue investing in both.”
Smartphone-based messaging apps are now sweeping across North America, Asia and Europe.
“Communication is the one thing that you have to use daily, and it has a strong network effect,” said Jonathan Teo, an early investor in Snapchat, another red-hot messaging company that flirted a year ago with a multibillion-dollar acquisition offer from Facebook.
“Facebook is more about content and has not yet fully figured out communication.”
Even so, he balked at the price tag.
As part of the deal, WhatsApp co-founder and chief executive officer Jan Koum will join Facebook’s board, and the social network will grant an additional $3 billion worth of restricted stock units to WhatsApp’s founders, including Koum.
That is on top of the $16 billion in cash and stock that Facebook will pay.
“Goodness gracious, it’s a good deal for WhatsApp,” Teo said.
Shares in Facebook slid 5 percent to $64.70 after hours, from a close of $68.06 on the Nasdaq.
Facebook said on Wednesday it will pay $4 billion in cash and about $12 billion in stock in its single largest acquisition, dwarfing the $1 billion it paid for photo-sharing app Instagram.
The price paid for Instagram, which had just 30 million users, was itself considered overvalued by many observers at the time.
Facebook promised to keep the WhatsApp brand and service, and pledged a $1 billion cash break-up fee if the deal falls through.
Facebook was advised by Allen & Co, while WhatsApp enlisted Morgan Stanley for the deal.
Visual jitter ruins the experience of navigating a site. Addy Osmani reveals how to use Chrome DevTools to get your designs running at a steady 60fps.
Whether it’s on desktop or mobile, users want their web experience to be snappy, smooth and delightful. Even if the browser is busy rendering the page or loading in content, the user should still be able to scroll around and interact with it without any slow-down. No one likes seeing visual glitches.
Low or inconsistent frame rates affect not only user experience but user engagement: something that large sites like Flickr are increasingly starting to address. In this article, we will explore how to apply the lessons they have learned to your own sites.
Measurement is the most important part of any performance-profiling work. This article focuses on how to do this within Chrome DevTools. However, always test your sites and apps using the tools in other browsers to check if any issues are browser-specific.
What is jank?
The human eye perceives a continuous stream of information. It does not naturally see motion as a series of frames. In the worlds of animation, film and gaming, using a series of still frames to simulate motion creates some interesting perceptual artifacts – especially if those frames are played back too slowly, or at an inconsistent rate. When the frame rate varies, movements can look jerky, and images can appear to jitter.
For an optimal user experience, animations must be silky, scrolling must be buttery-smooth, and your page must contain little or no ‘jank’ – visual disruption caused by variation in frame rate.
On the web, a low frame rate (or a janky experience) means that the human eye can make out individual frames. Giving users a jank-free experience often comes down to creating sites and applications that can run at a steady 60fps, similar to videogames.
At 60fps, you have 16.66ms for Chrome to complete every task necessary to display one frame of your webpage, including logic processing, painting, layout, image decoding and compositing – and that’s in an ideal world. Factor in miscellaneous browser processes, and the real figure is probably 8-10ms. Go over that limit, and the user will start to experience jank.
What’s magical about the number 60? Well, the frame rates of animations should match the refresh rates of the hardware they are displayed on – which, for most modern devices, is around 60Hz.
Phones usually refresh at a rate of 55-60Hz, laptops at 58-60Hz (although 50Hz in low power mode), while most monitors usually refresh at a rate of 50-62Hz.
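The frame-budget arithmetic above is easy to sketch in JavaScript. The sampler below is a minimal, illustrative version of the kind of measurement the DevTools FPS meter performs; it assumes a browser context for `requestAnimationFrame` and `performance.now`, while `frameBudgetMs` is plain arithmetic:

```javascript
// Time available per frame at a given display refresh rate.
// At 60Hz this is 1000 / 60 ≈ 16.67ms; once Chrome's own work is
// accounted for, your code realistically gets 8-10ms of it.
function frameBudgetMs(refreshRateHz) {
  return 1000 / refreshRateHz;
}

// Minimal FPS sampler (browser-only): counts requestAnimationFrame
// callbacks and reports frames-per-second roughly once a second.
function startFpsSampler(onSample) {
  let frames = 0;
  let last = performance.now();
  function tick(now) {
    frames += 1;
    if (now - last >= 1000) {
      onSample(Math.round((frames * 1000) / (now - last)));
      frames = 0;
      last = now;
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

In a page, `startFpsSampler(fps => console.log(fps))` will log a number that should hover near your display's refresh rate; sustained dips below it are frames going over budget.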
What causes jank?
- Long paint times for DOM elements
- Unnecessary image resizes (because you haven’t pre-scaled the image to the size that you require)
- Long image-decoding times
- Unexpected layer invalidations
- Garbage collector runs
- Network requests (for example, processing an XHR)
- Heavy animation or data processing.
Diagnosing slow paint times
Let’s quickly run through what the paint process involves. In the life of a web page the browser generally performs three core tasks: fetching resources, parsing and tokenizing these resources (the HTML/CSS/JS code), and finally drawing things to screen.
During the final task, the browser traverses the render tree – a tree of the visual elements making up the web page – and calls a paint method to display content to the screen. Painting can either be global (against the whole tree) or incremental (partial). The diagram below shows the order in which tasks are completed. It is taken from Tali Garsiel’s How Browsers Work.
Why should you care about this? Well, it’s important to be aware that the browser has to do a lot of work in order to draw things to the screen. Anything you do to increase the complexity of that task (for example, forcing the browser to recalculate the layout of the page) has the potential to introduce jank. You want to avoid this. So let’s talk about tools that you can use to identify potential bottlenecks.
Introducing the Chrome DevTools Timeline
Chrome DevTools’ Timeline panel provides an overview of all the activity in an application as it runs: for example, processing DOM events, rendering page layouts or painting elements to the screen. It can break this information down in three different ways: by Events, Frames or Memory usage.
For this article, we’re interested in Frames mode, which shows the tasks Chrome had to perform to generate a single frame – that is, a single update to the way the application is presented onscreen.
The Timeline won’t display any data by default, so to begin a recording session, you need to open your app and click on the grey circle at the bottom of the pane (or just use the Cmd/Ctrl+E shortcut). The record button will now turn red, and the Timeline will begin to capture information. If you don’t have a site or app of your own to hand, try http://inception-explained.com, a site that currently exhibits jank.
Complete a few actions inside your app (for example, scrolling) and after a few seconds, click the button again to stop recording.
Hovering over a record will display an extended tooltip with details about the time taken to complete it. Pay attention to these, since they contain a lot of useful information, especially the Call Stack. The Timeline identifies when your app causes a forced synchronous layout and marks these records with a yellow warning icon.
Diagnosing long paint times
Last year, Google shared its advice for diagnosing the causes of long paint times. To uncover which styles are slow, Google advised developers to do the following:
- Navigate to a page and open up the Chrome DevTools.
- Take a Timeline recording, noting down the paint times.
- Inspect individual elements, starting with the larger ones more likely to cause significant slow-downs.
- Disable the styles applied to a suspect element one at a time, since some (box-shadow, for example) are far more expensive to paint than others.
- Repeat this process, checking if paint times have gone down. If they have, the last style removed is the culprit, and the others can be added back in.
- Or: use different styles to try to recreate the overall look of the page in a way that reduces total calculation time.
The process for establishing which elements are slow is similar, only rather than disabling styles, it means setting those parts of the DOM to display:none. This works fairly well – but thankfully, Chrome DevTools now contains some newer features we can use to help to troubleshoot paints and repaints. Before we look at them, let’s review what we mean by a ‘repaint’.
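The element-bisection loop described above can be scripted so you don't lose track of which element is hidden. This is an illustrative sketch rather than a DevTools feature: it works on anything with a `style.display` property, and in a real page you would pass it `document.querySelectorAll('body > *')`:

```javascript
// Hide each element in turn (display:none removes it from painting),
// yield it so the caller can re-record the Timeline, then restore it.
function* bisectPaintCost(elements) {
  for (const el of elements) {
    const previous = el.style.display;
    el.style.display = 'none';    // element no longer painted
    yield el;                     // take a Timeline recording here
    el.style.display = previous;  // restore before moving on
  }
}

// In the DevTools console you might drive it step by step:
// const walk = bisectPaintCost(document.querySelectorAll('body > *'));
// walk.next(); // hides the first element; record, compare, repeat
```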
What is a repaint?
Each time a user interacts with a page, only parts of it will be changed: for example, they may perform an action that requires the browser to change the visibility of an element, or add an outline to it. Chrome keeps an eye on which parts of the screen need to be changed, creating a ‘damage rectangle’ around the affected area.
Before making the changes, it saves the rectangle as a bitmap, then only paints the delta between the old rectangle and the new one.
The process of updating the page is known as a repaint. In performance terms, a repaint is an expensive operation, and one that, ideally, you want to avoid. If you notice that there are particular areas of a page that require a lot of repainting, it’s useful to investigate what can be done to reduce this.
Diagnosing long paint times: the new way
Google recently added a couple of new features to Chrome DevTools to make it easier to diagnose the causes of long paint times. These are available in Chrome Canary.
First, a new helper enables you to toggle the visibility:hidden setting on an element. When this style is applied to an element, the browser doesn’t paint that element, but otherwise preserves the layout of the page unchanged. To use the shortcut, select a DOM element in the Elements panel and press H.
Second, the Enable continuous page repainting option in the Settings panel helps identify elements that have a high paint cost. It forces Chrome to repaint the page continuously, providing a counter that shows just how long this is taking. To diagnose what is causing the slowdown, keep your eye on this counter, and use H to toggle individual styles on and off.
Here is what a workflow for diagnosing paint issues using these new tools might look like:
- Open up your page, launch Chrome DevTools and switch to the Timeline panel. Hit record and interact with your page the same way your user would.
- Check the Timeline for any frames that went over budget: that is, that took longer than 16.6ms to calculate. If you’re close to this figure, you’re probably way over budget for mobile devices. Aim to complete all of your work within 10ms to have some margin for error. (If you’re building for mobile – which you should be – you should run this analysis using remote debugging.)
- If it was a paint or layout issue:
a) Go to Settings and check Enable continuous page repainting.
b) Walk through the DOM tree, hiding nonessential elements using the H shortcut. Identify which elements make a big difference to paint times.
c) Once you know there is something about an element that’s slowing the painting down, uncheck styles that could have an impact on paint time (such as box-shadow) and look at frame rate again.
d) Continue until you’ve located the style responsible for the slow-down.
- Rinse and repeat.
Especially on sites that rely heavily on scrolling, you might discover that your main content relies on overflow:scroll. This is a real challenge: this kind of scrolling isn’t GPU-accelerated, so the content is repainted every time the user scrolls. You can work around such issues by using normal page scroll (overflow:visible) together with position:fixed for elements that need to stay put.
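As a sketch of that workaround (class names here are illustrative, not from any particular site), let the document itself scroll and pin the surrounding chrome, rather than scrolling an inner pane:

```css
/* Jank-prone: an inner pane with its own scrollbar is repainted
   on every scroll */
.content { height: 100%; overflow: scroll; }

/* Workaround: let the page scroll normally... */
.content { overflow: visible; }

/* ...and fix the header in place instead */
.site-header { position: fixed; top: 0; left: 0; right: 0; }
```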
Other useful tools
Chrome DevTools also has several other features that can help you to troubleshoot your web apps.
The Rendering section of the Settings panel now includes an option marked Show paint rectangles. Enabling it highlights the part of the screen being repainted in each frame. This provides a simple visual workflow for minimising slow-down: you want to keep the areas being repainted as small as possible.
An older, but equally useful, tool for visualising jank is the real-time FPS meter. Again, you can find this in the Rendering section of the Settings panel: look for the Show FPS meter checkbox. When activated, you will see a dark box in the top-right corner of your page with frame statistics. This can be used during live editing to diagnose what is causing frame rate to drop off without having to switch in and out of the Timeline view.
However, keep in mind that it is easy to miss frames with intermittent jank when using only the FPS meter. You should also note that FPS on desktop differs from that on devices, so be sure to profile performance there too.
Pro tips for troubleshooting
To round off the article, let’s run through a few tips to make troubleshooting pages quicker as well as easier.
- Your code can also use console.time() and console.timeEnd() to mark ranges in DevTools Timeline recordings.
- If you check Show CPU activity on the ruler in the Timeline section of the Settings panel, you can overlay the CPU activity in your Timeline recordings. Light bars indicate the CPU was busy. If you hover over a CPU bar, this highlights the region during which the CPU was active.
- You can drill down to records of a particular type in the Timeline using the Cmd/Ctrl+F shortcut. Just enter the name of a particular record type (for example, scroll) in the search field, and the Timeline will only display the records containing that term.
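The console.time() tip above looks like this in practice: the pair brackets a named range that then shows up in your Timeline recording alongside the browser's own records. The timed function here is just an illustrative stand-in for your app's real work:

```javascript
// An illustrative stand-in for expensive application work.
function expensiveWork(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += Math.sqrt(i);
  return total;
}

console.time('expensive-work');     // range start: appears in the Timeline
const result = expensiveWork(1e6);
console.timeEnd('expensive-work');  // range end: logs the elapsed time
```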
While I don’t suggest you purely focus on paint or layout, it is useful to be aware of the cost of using certain styles in the wrong way.
Are you both bald AND lost? Then the new “SmartWig” from Japan might be just what you need.
The techno-toupe, which can read the wearer’s brainwaves and direct them to their destination with onboard GPS, is the latest and possibly the wackiest addition to the world of wearable computing.
The country that brought us world-changing hits like the Walkman and the pocket calculator, as well as instantly forgettable misses like the walking toaster, now offers a hi-tech hairpiece.
The proof-of-concept invention comes in three varieties, each specially designed to make life that little bit easier for the follically challenged.
Wearers of the Presentation Wig will be able to remotely control a laser pointer from their mop-top. They can move forward through a PowerPoint slideshow by tugging the right sideburn and go back a page by pulling on the left.
The Navigation Wig uses GPS to speak to satellites and guide users to their destination with tiny vibrations on different parts of the head.
Meanwhile, the Sensing Wig monitors body temperature, blood pressure and brainwaves and can also record sounds and images to allow wearers to playback their day and see what set their systems aflutter.
“There is a wide variety of wearable computing devices, such as computational glasses, clothes, shoes, and so on. However, most wearable devices have become neither common nor popular,” the developers said in an essay issued last year.
“We think one of the biggest reasons is the style: the focus has been function, not style,” said Hiroaki Tobita and Takuya Kuzi.
“The goal of SmartWig is to achieve both natural and practical wearable devices,” they said, adding that the “natural appearance” of their invention, which can be made from human hair, could prove a selling point.
A spokeswoman for Sony said Thursday that patents for the SmartWig had been filed in the European Union and the United States, although there were currently no plans to commercialise the product.
Despite its phenomenal success with the much-aped Walkman, Sony has struggled in recent years in its mainstay electronics business, and has been without a significant global hit.
Sony’s chief executive officer Kazuo Hirai told local media last month he is pouring business resources into the development of wearable devices, which also includes the company’s second-generation smart watch.
One of the most popular media players of all time, Winamp, will cease to be available come December 20, 2013.
AOL has announced that it is shutting down Winamp.com and its associated web services and that the Winamp media player will also no longer be available for download.
“Winamp.com and associated web services will no longer be available past December 20, 2013. Additionally, Winamp Media players will no longer be available for download. Please download the latest version before that date. See release notes for latest improvements to this last release. Thanks for supporting the Winamp community for over 15 years,” said the note on the Winamp.com website.
The Winamp media player was at the peak of its popularity in the late nineties. One of the highlights of the player was the capability to skin its user interface and add plug-ins to extend its functionality.
The player supports all popular file formats including Ogg Vorbis. Winamp also supports gapless playback for MP3 and AAC format files. It was also among the first players to offer streaming music through support for SHOUTCast, Nullsoft’s cross-platform software to stream media.
The first iteration of Winamp was developed by Justin Frankel and Dmitry Boldyrev in 1997 as freeware. Frankel founded the software company Nullsoft and continued development of the player, which was later turned into shareware with new features and functionality added.
Nullsoft was acquired by AOL in June 1999 for $80 million in stock, and has been a subsidiary of the media giant ever since.
The player was developed essentially for Windows, but betas for Android and Mac OS X and an alpha for Linux were also released, though only the Windows version has been updated regularly. The last version (Winamp 5.66) was released on 20 November 2013, the day the shutdown was announced.
For most people who used PCs in the 90s, the “Winamp, it really whips the llama’s ass” MP3 clip that played on first launch was synonymous with Winamp, and with setting up a PC for the first time. You can relive the era by downloading the player from Winamp.com before it goes offline.
A new report indicates that Google is working on a new camera API, which will enhance the camera experience on an Android smartphone.
In a report, Ars Technica has detailed some of the changes expected in the new API, including support for RAW image output. As per the report, RAW images are only minimally compressed and processed compared with JPEG, the default format for photos captured on Android smartphones. RAW output would increase the amount of correction possible, and programs like Photoshop can do much more with a RAW file than with a JPEG. It’s worth pointing out that Nokia has already introduced RAW image output support in its flagship Windows Phone 8 phablet, the Lumia 1520.
Further, the report reveals a month old batch of code that showed the new camera API was in the works. The code was first spotted by app developer Josh Brown. The code said, “DO NOT MERGE: Hide new camera API. Not yet ready.”
In addition, the alleged new camera API is rumoured to bring a face-detection feature that would include bounding boxes around faces and centre coordinates; Android’s OEM partners like Samsung, Sony and HTC have already introduced face detection in their top-end smartphones. Another expected addition is a revamped burst mode, along with a major overhaul of image quality. The report cites documentation with phrases like “substantially improved capabilities” and “fine-grain control”, suggesting that Google is paying close attention to image quality.
The leaked APIs also suggested that Google might bring removable camera support, much like Sony’s Cyber-shot DSC-QX100 and DSC-QX10 lens cameras, to Android smartphones. The report notes the API for removable camera, saying: “The camera device is removable and has been disconnected from the Android device, or the camera service has shut down the connection due to a higher-priority access request for the camera device.” The report does not reveal any details about the release of the new API for Android.
While announcing the Motorola Moto G at an event in Brazil, the Google-owned handset maker confirmed its intentions to get back into the Indian smartphone market next year, in early January.
A tweet by Guy Kawasaki later confirmed that the Motorola Moto G dual-SIM variant will be making it to the Indian market, as well as Brazil. The tweet said, “#MotoG Dual SIM will be available in India and Brasil.” Kawasaki confirmed the news via a question and answer session with Motorola.
While Motorola has confirmed the Moto G will be arriving in India, it has not revealed precise plans for the rollout. We expect it to be sold in India via Motorola’s official online channels, much like the Nexus devices on the Google Play store. There has been no word on Moto G India pricing but considering US pricing of $179 for 8GB model and $199 for 16GB model, it can be expected to be priced around Rs. 12,000 to Rs. 20,000 in India, including various taxes.
The Motorola Moto G runs Android 4.3 out-of-the-box and the Google-owned handset maker also has confirmed that the smartphone will be getting the Android 4.4 KitKat by January 2014.
The Moto G features a 4.5-inch HD ‘edge-to-edge’ display with a resolution of 720×1280 pixels (translating to a pixel density of 329ppi) and boasts a Corning Gorilla Glass 3 screen. Powered by a quad-core 1.2GHz Qualcomm Snapdragon 400 (Cortex-A7) processor coupled with an Adreno 305 GPU, the Moto G features 1GB of RAM. On the optics front, the Moto G sports a 5-megapixel rear camera along with an LED flash and also includes a 1.3-megapixel front-facing camera. The rear camera supports HD (720p) video recording.
The Motorola Moto G also features water-resistant nano-coating on the inside and outside. The Moto G packs a 2070mAh battery, which Motorola claims can deliver up to 30 percent more talktime than the Apple iPhone 5s.
Moto G – Google and Motorola’s new budget smartphone
To generate volume light, you must use a direct light source. 3ds Max standard directional lights work well – but you can also use V-Ray plane lights by increasing the directional parameter.
Start by adding a target directional light into your scene and position the light source and the target so that the light passes through the opening or window. The target must go beyond the floor or wall so that the volume light continues throughout. Avoid angling the direct light towards the camera otherwise you may end up with a washed out render due to the volume light covering the camera.
The volume light will be contained within the direct light’s hotspot beam and falloff field. If you set the falloff field to be much greater than the hotspot beam, the volume light will start to lose density quite rapidly and fade out the further it travels from the centre of the light. If you want an even distribution of light, it is best to keep the falloff field value close to the hotspot beam value.
By default, 3ds Max standard lights do not have any attenuation applied, so the light has continuous luminosity. This is incorrect. Light should start to lose luminosity by dispersing the further it travels away from the source. Within the decay parameters, set the type to Inverse Square. If the light decays too fast, you can tweak this by adjusting the Start Parameter.
3ds Max standard light multipliers do not behave in the same way that V-Ray lights do. When using Inverse Square falloff, the multiplier must be set to a very high value in order to appear within the scene. A good value to start from is 800, as this is roughly equal to a standard V-Ray light. The multiplier is also affected by the start decay parameter: the lower the decay, the lower the multiplier needs to be. You may end up setting the light multiplier in the thousands to get the correct illumination in accordance with the decay.
Under shadow parameters, turn on atmosphere shadows and area shadows; area shadows soften as they fall further from the casting object. Increasing the subdivisions here will also improve the shadow quality and reduce noise.
Go to Environment Effects and add a V-Ray Environment Fog to atmosphere effects. Under V-Ray Environment Fog nodes, add the Direct Light. Turn off Use All Lights so the volume light effect is only applied to the lights you choose.
In the general parameters, you can either set the fog colour here or within the Directional Light. You cannot mix the colours, so one must remain white to be inactive. The Fog Distance controls the length the volume light will travel along the direct light, so set this distance to be the light’s entire length.
The fog height also affects the visibility. Therefore this setting must cover the entire height of the light. If the light is positioned 9,000mm above the floor, then this must be your minimum value. A good way to determine the value is to draw a rectangle that covers the height and length of the scene.
V-Ray Environment Fog is an atmospheric effect that is calculated during rendering using a brute-force method, so it is important to optimise the settings to keep render times down. The subdivisions parameter controls the noise level: lower values produce more noise, whereas higher values produce less at the cost of longer render times. Start with a value of 16 and increase in increments of 8 until you are satisfied with the results. Usually 50 subdivisions are adequate, but you may need to go up to 100 depending on the scene.
If the Scatter GI parameter is enabled, the volume light will scatter throughout your scene via global illumination, illuminating surrounding objects in addition to those lit directly. This adds further realism, but it can render very slowly. You may find that beyond a certain subdivision value the results look the same; try 8 and then 16, and if there is no visible difference, a value of 8 is adequate.
The Sass extend directive can improve your workflow. Nick Walsh explains how to implement the CSS preprocessor component without bloat.
Watching a CSS veteran discover Sass for the first time is always entertaining. Concepts like nesting, variables and mixins seem so natural and, once the syntax is committed to memory, it’s difficult to remember a time before their availability. Relief is the most common emotion: a recognition that the days of manually prefixing and copying hexadecimal values have passed.
Of the core concepts found in the preprocessor, @extend stands out for three reasons: it has the highest potential to drastically change your workflow; misuse is dangerous to the health of your stylesheet; and newcomers struggle with it far more than with other Sass functionality. Follow the accompanying patterns to start safely utilising @extend.
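As a taste of the safe pattern (selector names here are illustrative): extend a placeholder selector rather than a concrete class, so the compiled CSS contains only the selectors you actually use:

```scss
// Placeholder selectors (%name) emit no CSS of their own; they exist
// only to be extended.
%message {
  padding: 10px;
  border: 1px solid;
  border-radius: 4px;
}

.message-error {
  @extend %message;
  border-color: red;
}

.message-success {
  @extend %message;
  border-color: green;
}
```

Compiled, the shared declarations appear once under the grouped selector `.message-error, .message-success`, so there is no duplication.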
Twitter has unveiled the price range for its shares when the company lists on the stock exchange.
In a filing on Thursday, Twitter said it planned to sell 70 million shares priced between $17 and $20 (£10 – £12) to raise up to $1.4bn (£865m).
The offering represents 13% of Twitter and values it at as much as $11bn.
Analysts said the valuation, which was less than forecast, indicated the firm wanted to avoid the dip in prices that followed Facebook’s listing.