Monthly Archives: November 2013
Are you both bald AND lost? Then the new “SmartWig” from Japan might be just what you need.
The techno-toupe, which can read the wearer’s brainwaves and direct them to their destination with onboard GPS, is the latest and possibly the wackiest addition to the world of wearable computing.
The country that brought us world-changing hits like the Walkman and the pocket calculator, as well as instantly-forgettable misses like the walking toaster, now offers a hi-tech hairpiece.
The proof-of-concept invention comes in three varieties, each specially designed to make life that little bit easier for the follically challenged.
Wearers of the Presentation Wig will be able to remotely control a laser pointer from their mop-top. They can move forward through a PowerPoint slideshow by tugging the right sideburn and go back a page by pulling on the left.
The Navigation Wig uses GPS to speak to satellites and guide users to their destination with tiny vibrations on different parts of the head.
Meanwhile, the Sensing Wig monitors body temperature, blood pressure and brainwaves, and can also record sounds and images to allow wearers to play back their day and see what set their systems aflutter.
“There is a wide variety of wearable computing devices, such as computational glasses, clothes, shoes, and so on. However, most wearable devices have become neither common nor popular,” the developers said in an essay issued last year.
“We think one of the biggest reasons is that the focus has been function, not style,” said Hiroaki Tobita and Takuya Kuzi.
“The goal of SmartWig is to achieve both natural and practical wearable devices,” they said, adding that the “natural appearance” of their invention, which can be made from human hair, could prove a selling point.
A spokeswoman for Sony said Thursday that patents for the SmartWig had been filed in the European Union and the United States, although there were currently no plans to commercialise the product.
Despite its phenomenal success with the much-aped Walkman, Sony has struggled in recent years in its mainstay electronics business, and has been without a significant global hit.
Sony’s chief executive officer Kazuo Hirai told local media last month that he is pouring business resources into the development of wearable devices, an effort that also includes the company’s second-generation smart watch.
One of the most popular media players of all time, Winamp, will cease to be available come December 20, 2013.
AOL has announced that it is shutting down Winamp.com and its associated web services and that the Winamp media player will also no longer be available for download.
“Winamp.com and associated web services will no longer be available past December 20, 2013. Additionally, Winamp Media players will no longer be available for download. Please download the latest version before that date. See release notes for latest improvements to this last release. Thanks for supporting the Winamp community for over 15 years,” said the note on the Winamp.com website.
The Winamp media player was at the peak of its popularity in the late nineties. One of the highlights of the player was the capability to skin its user interface and add plug-ins to extend its functionality.
The player supports all popular file formats, including Ogg Vorbis, and offers gapless playback for MP3 and AAC files. It was also among the first players to offer streaming music through support for SHOUTcast, Nullsoft’s cross-platform media streaming software.
The first iteration of Winamp was developed by Justin Frankel and Dmitry Boldyrev in 1997 as freeware. Frankel founded his software company, Nullsoft, and continued development of the player, which was later turned into shareware with new features and functionality added.
Nullsoft was acquired by AOL in June 1999 for $80 million in stock and has been a subsidiary of the media giant ever since.
The player was developed primarily for Windows, but betas for Android and Mac OS X and an alpha for Linux were also released; only the Windows version was updated regularly. The last version (Winamp 5.66) was released on 20 November 2013, the day the shutdown was announced.
For most people who used PCs in the 90s, the “Winamp, it really whips the llama’s ass” MP3 file that played on first launch was synonymous with Winamp, and with setting up a PC for the first time. You can download the player and relive the era by visiting Winamp.com.
A new report indicates that Google is working on a new camera API, which will enhance the camera experience on Android smartphones.
A report by Ars Technica details some of the changes expected in the new API, including support for RAW image output. According to the report, RAW images are only modestly compressed and processed compared to JPEG, the default format for images captured on Android smartphones. RAW output would increase the amount of correction possible, and programs like Photoshop can do much more with a RAW file than a JPEG. It’s worth pointing out that Nokia has already introduced RAW image output support in its flagship Windows Phone 8 phablet, the Lumia 1520.
Further, the report points to a month-old batch of code showing that the new camera API was in the works. The code, first spotted by app developer Josh Brown, contained the comment: “DO NOT MERGE: Hide new camera API. Not yet ready.”
In addition, the alleged new camera API is rumoured to bring a face-detection feature that would include bounding boxes around faces and centre coordinates; Android OEM partners like Samsung, Sony and HTC have already introduced face detection in their top-end smartphones. Other expected additions are a revamped burst mode and a major overhaul of image quality. The report cites documentation with phrases like substantially improved capabilities and fine-grain control, suggesting that Google is paying close attention to image detail.
The leaked APIs also suggest that Google might bring removable camera support, much like Sony’s Cyber-shot DSC-QX100 and DSC-QX10 lens cameras, to Android smartphones. The report quotes the API documentation for removable cameras: “The camera device is removable and has been disconnected from the Android device, or the camera service has shut down the connection due to a higher-priority access request for the camera device.” The report does not reveal any details about when the new API will be released for Android.
While announcing the Motorola Moto G at an event in Brazil, the Google-owned handset maker confirmed its intention to re-enter the Indian smartphone market in early January next year.
A tweet by Guy Kawasaki later confirmed that the Motorola Moto G dual-SIM variant will be making it to the Indian market, as well as Brazil. The tweet said, “#MotoG Dual SIM will be available in India and Brasil.” Kawasaki confirmed the news via a question and answer session with Motorola.
While Motorola has confirmed the Moto G will be arriving in India, it has not revealed precise rollout plans. We expect it to be sold in India via Motorola’s official online channels, much like the Nexus devices on the Google Play store. There has been no word on Moto G India pricing, but considering US pricing of $179 for the 8GB model and $199 for the 16GB model, it can be expected to be priced between Rs. 12,000 and Rs. 20,000 in India, including various taxes.
The Motorola Moto G runs Android 4.3 out-of-the-box, and the Google-owned handset maker has also confirmed that the smartphone will be getting Android 4.4 KitKat by January 2014.
The Moto G features a 4.5-inch HD ‘edge-to-edge’ display with a resolution of 720x1280 pixels (translating to a pixel density of 326ppi) and boasts a Corning Gorilla Glass 3 screen. Powered by a quad-core 1.2GHz Qualcomm Snapdragon 400 (Cortex-A7) processor coupled with an Adreno 305 GPU, the Moto G features 1GB of RAM. On the optics front, the Moto G sports a 5-megapixel rear camera with an LED flash and also includes a 1.3-megapixel front-facing camera. The rear camera supports HD (720p) video recording.
The Motorola Moto G also features water-resistant nano-coating on the inside and outside. The Moto G packs a 2070mAh battery, which Motorola claims can deliver up to 30 percent more talktime than the Apple iPhone 5s.
Moto G – Google and Motorola’s new budget smartphone
To generate volume light, you must use a direct light source. 3ds Max standard directional lights work well – but you can also use V-Ray plane lights by increasing the directional parameter.
Start by adding a target directional light into your scene and position the light source and the target so that the light passes through the opening or window. The target must go beyond the floor or wall so that the volume light continues throughout. Avoid angling the direct light towards the camera; otherwise you may end up with a washed-out render due to the volume light covering the camera.
The volume light will be contained within the direct light’s hotspot beam and falloff field. If you set the falloff field to be much greater than the hotspot beam, the volume light will start to lose density quite rapidly and fade out the further it travels from the centre of the light. If you want an even distribution of light, it is best to keep the falloff field value close to the hotspot beam value.
By default, 3ds Max standard lights do not have any attenuation applied, so the light has continuous luminosity. This is incorrect. Light should start to lose luminosity by dispersing the further it travels away from the source. Within the decay parameters, set the type to Inverse Square. If the light decays too fast, you can tweak this by adjusting the Start Parameter.
3ds Max standard light multipliers do not behave in the same way that V-Ray lights do. When using Inverse Square falloff, the multiplier must be set to a very high value in order to appear within the scene. A good value to start from is 800, as this roughly equals a standard V-Ray light. The multiplier is also affected by the start decay parameter. The lower the decay, the lower the multiplier needs to be. You may end up setting the light multiplier up in the thousands to get the correct illumination in accordance with the decay.
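The relationship between the multiplier and Inverse Square decay can be sketched with the standard inverse-square law. This is a simplification for intuition; 3ds Max’s exact internal formula may differ:

```latex
I(d) \approx M \left(\frac{d_0}{d}\right)^2 \quad \text{for } d \ge d_0
```

Here $M$ is the light multiplier, $d_0$ is the decay Start distance, and $d$ is the distance from the source (the light stays at full intensity up to $d_0$). Increasing $d_0$ raises the intensity reaching any given point, which is why a slower decay lets you use a lower multiplier, while an aggressive decay can push the required multiplier into the thousands.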
Under shadow parameters, turn on atmosphere shadows and area shadows. This softens them as the shadow moves further away from the casting object. Increasing the subdivisions here will also improve the shadow quality and reduce noise.
Go to Environment Effects and add a V-Ray Environment Fog to atmosphere effects. Under V-Ray Environment Fog nodes, add the Direct Light. Turn off Use All Lights so the volume light effect is only applied to the lights you choose.
In the general parameters, you can set the fog colour either here or within the directional light. The two colours cannot be mixed, so one must remain white to be inactive. The Fog Distance controls how far the volume light travels along the direct light, so set this distance to the light’s entire length.
The fog height also affects the visibility. Therefore this setting must cover the entire height of the light. If the light is positioned 9,000mm above the floor, then this must be your minimum value. A good way to determine the value is to draw a rectangle that covers the height and length of the scene.
V-Ray Environment Fog is an atmospheric effect that is calculated during rendering using a brute-force method, so it is important to optimise the settings to keep render times manageable. The subdivisions parameter controls the noise level: lower values produce more noise, whereas higher values produce less at the cost of longer render times. Start with a value of 16 and increase in increments of 8 until you are satisfied with the results. Usually 50 subdivisions are adequate, but you may need to go up to 100 depending on the scene.
If the scatter GI parameter is enabled, the volume light will scatter throughout your scene via global illumination, illuminating surrounding objects. This adds further realism beyond the direct light alone, but it can render very slowly. You may find that beyond a certain value the results look the same; try setting this to 8 and then 16, and if there is no visible difference, a value of 8 is adequate.
The Sass extend directive can improve your workflow. Nick Walsh explains how to implement the CSS preprocessor component without bloat.
Watching a CSS veteran discover Sass for the first time is always entertaining. Concepts like nesting, variables and mixins seem so natural and, once the syntax is committed to memory, it’s difficult to remember a time before their availability. Relief is the most common emotion: a recognition that the days of manually prefixing and copying hexadecimal values have passed.
Of the core concepts found in the preprocessor, @extend stands out for three reasons: it has the highest potential to drastically change your workflow; misuse is dangerous to the health of your stylesheet; and newcomers struggle with it far more than with other Sass functionality. Follow the accompanying patterns to start using @extend safely.
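As a minimal sketch of what @extend does (the selector names here are hypothetical, not from the accompanying patterns), extending a placeholder selector lets several rules share declarations without duplicating them in the compiled CSS:

```scss
// %message-base is a placeholder selector: it emits no CSS on its
// own and exists only to be extended.
%message-base {
  padding: 10px 15px;
  border: 1px solid;
  border-radius: 3px;
}

.notice {
  @extend %message-base; // .notice is merged into the shared rule
  border-color: #31708f;
}

.error {
  @extend %message-base; // so is .error
  border-color: #a94442;
}
```

Sass compiles this by grouping the extending selectors into a single shared rule (`.notice, .error { padding: 10px 15px; ... }`) rather than copying the declarations into each class; that selector grouping is the source of both the bloat savings and, when over-used on deeply nested selectors, the dangers mentioned above.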