Saturday, 8 February 2020

The Future On Your Way: Self Driving Car


    

What is a Self-Driving Car?
          A self-driving vehicle (car, bike or truck) is one in which a human driver is never required to take control; it travels over ordinary roads to a predetermined destination without human intervention. Such vehicles are also known as smart or "driverless" vehicles. They combine sensors, cameras, a navigation system, artificial intelligence (AI) and software to control and drive the vehicle safely.




Companies that are developing driverless cars.

Many major automotive manufacturers, including Ford, Mercedes-Benz, General Motors, Volkswagen, Audi, Toyota, Volvo, BMW and Nissan, are testing driverless car systems. BMW has been testing driverless systems since around 2005.


SAE International's "Levels of Driving Automation" Standard for Self-Driving Vehicles:


The standard defines six levels of driving automation, ranging from Level 0 to Level 5. Let's take a brief look at each level; a short code sketch encoding the levels follows the list.
  • Level 0 does not feature any self-driving tech at all. 
  • Level 1 cars offer at least one system that helps the driver brake, steer, or accelerate, but if there are multiple systems, they are not capable of communicating with each other. 
  • Level 2 cars can simultaneously control steering and speed, even if the driver is not driving, for short periods of time. Think lane-centring technology combined with advanced cruise control, as an example. 
  • Level 3 vehicles can drive themselves under limited conditions, but the driver must stay attentive and be ready to take over. These cars aren't yet widely available but are being tested by some tech start-ups.
  • Level 4 cars, once programmed to a destination, will not need driver input, but the controls are available should the driver wish to intervene.
  • Level 5 cars will be fully autonomous without any driver input.
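
To make the taxonomy concrete, here is a minimal Python sketch encoding the six levels; the identifiers are our own shorthand for the SAE categories, not official names:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Shorthand encoding of the six SAE J3016 levels described above."""
    NO_AUTOMATION = 0           # no self-driving tech at all
    DRIVER_ASSISTANCE = 1       # one assist system (brake, steer, or accelerate)
    PARTIAL_AUTOMATION = 2      # steering and speed together, driver supervises
    CONDITIONAL_AUTOMATION = 3  # self-drives in limited conditions, driver on call
    HIGH_AUTOMATION = 4         # no driver input needed once a destination is set
    FULL_AUTOMATION = 5         # fully autonomous, no driver input at all

def driver_must_supervise(level: SAELevel) -> bool:
    # Up to Level 2 the human remains responsible for monitoring the road.
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```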





Core Technologies Used in Self-Driving Cars.

Cameras

Cameras used in self-driving cars have the highest resolution of any sensor. The data processed by cameras and computer vision software helps identify edge-case scenarios and captures detailed information about the car's surroundings. 
All Tesla vehicles with Autopilot capabilities, for example, have eight external-facing cameras that help them understand the world around the car and train their models for future scenarios. 

Unfortunately, cameras don't work as well when visibility is low, such as in a storm, fog or even dense smog. Thankfully, self-driving cars are built with redundant systems to fall back on when one or more systems aren't functioning properly. 
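
As a rough illustration of what the computer vision side does with a camera frame, here is a minimal OpenCV sketch; the input file name is hypothetical, and real perception stacks go far beyond simple edge detection:

```python
import cv2

# Load a single dashcam frame (hypothetical file) and extract edges,
# a crude stand-in for how vision software finds structure in a scene.
frame = cv2.imread("dashcam_frame.jpg")
if frame is None:
    raise FileNotFoundError("provide a sample dashcam frame as dashcam_frame.jpg")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop colour information
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)             # strong gradients: lane paint, curbs
cv2.imwrite("edges.jpg", edges)                 # save the edge map for inspection
```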

ADAS System

ADAS stands for Advanced Driver Assistance System, a technology developed for safer driving and a stepping stone toward fully driverless vehicles. It uses cameras, sensors and sophisticated algorithms to notify the driver of a potential problem: a weaving cyclist, a stopped vehicle in the road, drifting across lanes, or sudden braking by the vehicle in front. All these hazards can be communicated to the driver instantly, allowing them to take the necessary action to avoid an accident.
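
One classic ADAS building block is forward-collision warning based on time-to-collision (TTC). The sketch below uses made-up sensor values and a commonly cited 2.5-second warning threshold; it illustrates the idea and is not any manufacturer's implementation:

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:       # gap is constant or growing: no threat
        return float("inf")
    return gap_m / closing_speed_mps

gap = 25.0       # metres to the vehicle ahead (from radar/camera, made up)
closing = 12.0   # metres per second we are gaining on it (made up)

if time_to_collision(gap, closing) < 2.5:   # common warning threshold
    print("Forward collision warning: brake or steer away!")
```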


GPS
However, an image of the surroundings alone is not enough to drive safely through the streets of a city. A GPS receiver is also present to help the car position and navigate itself, but the accuracy of commonly available GPS is only about 4 meters RMS.
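
For a sense of scale, the distance between two GPS fixes is commonly computed with the haversine formula. In the toy example below, two fixes about 11 meters apart are only barely distinguishable when each reading carries roughly 4 meters of error:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))   # mean Earth radius ~6371 km

# Two fixes 0.0001 degrees of latitude apart: about 11 m on the ground.
print(round(haversine_m(37.7749, -122.4194, 37.7750, -122.4194), 1))
```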





LiDAR and RADAR

Therefore, in order to improve the accuracy of navigation, the car also uses sensors such as Light Detection and Ranging, commonly referred to as LiDAR. LiDAR measures the distance to a target by illuminating it with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths are then used to build digital 3D representations of the target. LiDAR can achieve an accuracy of up to 2.5 cm. Multiple LiDAR modules around the body of the car help create an accurate map of the entire surroundings and avoid blind spots. LiDAR and RADAR play an important role in collision avoidance as well. 
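
The range measurement itself follows directly from the speed of light: a pulse travels to the target and back, so the distance is half the round trip. A minimal sketch of this time-of-flight calculation:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range_m(round_trip_s: float) -> float:
    """Range to a target from the round-trip time of a laser pulse."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds means a target ~30 m away.
print(round(lidar_range_m(200e-9), 2))  # 29.98
```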



LiDAR can also detect micro-topography hidden by vegetation, which helps archaeologists understand the ground surface. Ground-based LiDAR can capture the structure of a building; this digital information can be used for 3D mapping and to create models of the structure.

Other Sensors 

Self-driving cars also utilise traditional GPS tracking, along with ultrasonic and inertial sensors, to gain a full picture of what the car is doing as well as what's occurring around it. In the realm of machine learning and self-driving technology, the more data collected, the better the system can learn to drive.




Using the concepts of transfer learning, a pre-trained model can be adapted to detect and classify different kinds of objects. This functionality is very important for real-world autonomous navigation.
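
A minimal sketch of this idea with PyTorch and torchvision is shown below: an ImageNet-pretrained backbone is frozen, and only a new classification head is trained on driving-specific classes. The number and choice of classes here are assumptions for illustration:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed classes, e.g. car, pedestrian, cyclist, traffic sign

# Start from a network pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the driving classes;
# only this layer's weights are updated during fine-tuning.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
```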

Deep learning with Convolutional Neural Networks (CNNs) is used to detect and classify traffic lights, which convey important navigation information to an autonomous vehicle.

One more area where deep learning is used in autonomous vehicles is pixel-level identification of lanes using Fully Convolutional Networks (FCNs). This helps ensure that the vehicle follows all lane and traffic rules.
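
The sketch below builds a Fully Convolutional Network with torchvision and runs it on a random tensor standing in for a camera frame. The two-class setup (background vs. lane marking) is an assumption; a real system would be trained on labelled lane imagery:

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Untrained FCN with two output classes: background and lane marking.
model = fcn_resnet50(num_classes=2)
model.eval()

frame = torch.rand(1, 3, 360, 640)      # stand-in for one camera frame
with torch.no_grad():
    logits = model(frame)["out"]        # shape: (1, 2, 360, 640)

lane_mask = logits.argmax(dim=1)        # per-pixel class decision
print(lane_mask.shape)                  # torch.Size([1, 360, 640])
```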


How self-driving cars work.

AI technologies power self-driving car systems. Developers of self-driving cars use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously.


The neural networks identify patterns in the data, which is fed to the machine learning algorithms. That data includes images from cameras on self-driving cars from which the neural network learns to identify traffic lights, trees, curbs, pedestrians, street signs and other parts of any given driving environment.
For example, Google's self-driving car project, called Waymo, uses a mix of sensors, LiDAR (light detection and ranging -- a technology similar to radar) and cameras and combines all of the data those systems generate to identify everything around the vehicle and predict what those objects might do next. This happens in fractions of a second. Maturity is important for these systems. The more the system drives, the more data it can incorporate into its deep learning algorithms, enabling it to make more nuanced driving choices.
How does the vehicle travel from one location to another? The steps below outline the process; a simplified control loop mirroring them appears after the list.
·        The driver (or passenger) sets a destination. The car's software calculates a route.
·        A rotating, roof-mounted Lidar sensor monitors a 60-meter range around the car and creates a dynamic 3D map of the car's current environment.
·        A sensor on the left rear wheel monitors sideways movement to detect the car's position relative to the 3D map.
·        Radar systems in the front and rear bumpers calculate distances to obstacles.
·        AI software in the car is connected to all the sensors and collects input from Google Street View and video cameras inside the car.
·        The AI simulates human perceptual and decision-making processes using deep learning and controls actions in driver control systems, such as steering and brakes.
·        The car's software consults Google Maps for advance notice of things like landmarks, traffic signs and lights.
·        An override function is available to enable a human to take control of the vehicle.
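
Taken together, these steps form a sense-plan-act loop. The toy sketch below mirrors that structure; every function here is a simplified stand-in, not a real vehicle API:

```python
def read_sensors(position_m: float) -> dict:
    # Stand-in for the LiDAR map, wheel sensor, radars and cameras above.
    return {"position_m": position_m, "obstacle_distance_m": 40.0}

def decide(sensors: dict, destination_m: float) -> str:
    # Stand-in for the AI's perception and decision-making step.
    if sensors["obstacle_distance_m"] < 10.0:
        return "brake"
    return "cruise" if sensors["position_m"] < destination_m else "stop"

position = 0.0
while True:                      # the sense-plan-act control loop
    sensors = read_sensors(position)
    action = decide(sensors, destination_m=100.0)
    if action == "stop":         # destination reached (or human override)
        break
    if action == "cruise":
        position += 1.0          # stand-in for actuating throttle and steering

print("Arrived at destination.")
```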


What are the Challenges with Autonomous Cars?

The challenges range from the technological and legislative to the environmental and philosophical.


LiDAR and RADAR
LiDAR is expensive and is still trying to strike the right balance between range and resolution. If multiple autonomous cars were to drive on the same road, would their LiDAR signals interfere with one another? And if multiple radio frequencies are available, will the frequency range be enough to support mass production of autonomous cars?
Weather Conditions
What happens when an autonomous car drives in heavy precipitation? If there’s a layer of snow on the road, lane dividers disappear. How will the cameras and sensors track lane markings if the markings are obscured by water, oil, ice, or debris?
Traffic Conditions and Laws
Will autonomous cars have trouble in tunnels or on bridges? How will they do in bumper-to-bumper traffic? Will autonomous cars be relegated to a specific lane? Will they be granted carpool lane access? And what about the fleet of legacy cars still sharing the roadways for the next 20 or 30 years?

Artificial vs. Emotional Intelligence
Human drivers rely on subtle cues and non-verbal communication like making eye contact with pedestrians or reading the facial expressions and body language of other drivers to make split-second judgement calls and predict behaviors. Will autonomous cars be able to replicate this connection? Will they have the same life-saving instincts as human drivers?



Predicting agent behavior: It’s currently difficult to entirely understand the semantics of a scene, the behavior of other agents on the road and appearance cues such as blinkers and brake lights. Not to mention, predicting human error such as when a person signals a left turn but actually turns right.

Understanding perception complexity: Self-driving vehicles fail when objects are blocked from view, such as during snowstorms, when objects are seen only in a reflection, when fast-moving objects emerge from a blind spot, and in other long-tail scenarios.

Cyber security threats: Software is written by humans, and humans write code with vulnerabilities. Although very few people understand neural networks well enough to exploit these vulnerabilities, it can and will be done.

Continuous development and deployment: One problem facing self-driving vehicles is the process of re-validating changes to the software. If and when the code base changes, does this require testing for another 275 million miles to validate performance?

The future of self-driving cars

Despite the definite problems, self-driving car companies are moving forward and improving every day. 
Considering an estimated 93% of car accidents are caused by human error, the opportunity for self-driving cars to remove a major threat in the daily lives of billions of humans is too great to pass up. There will be many debates over the efficacy of self-driving cars as well as regulatory hurdles before we see Level 5 autonomy deployed globally.

Advantages of Driver-less Cars
1. Travelers would be able to journey overnight and sleep for the duration.

2. Speed limits could be safely increased, thereby shortening journey times.

3. There would be no need for driver's licenses or driving tests.

4. Presumably, with fewer associated risks, insurance premiums for car owners would go down.

5. Efficient travel also means fuel savings, simultaneously cutting costs and making less of a negative environmental impact.

6. Greater efficiency would mean fewer emissions and less pollution from cars in general.

7. Reduced need for safety gaps, lanes, and shoulders means that road capacities for vehicles would be significantly increased.

8. Self-aware cars would lead to a reduction in car theft.

9. Passengers should experience a smoother ride.

10. Difficult manoeuvring and parking would be less stressful and require no special skills. The car could even drop you off and then go park itself.

11. Human drivers notoriously bend rules and take risks, but driverless cars will obey every road rule and posted speed limit.

12. Entertainment technology, such as video screens, could be used without any concern about distracting the driver.


Disadvantages of Driver-less Cars
 1. A self-driving car would be unaffordable for most people, likely costing over $100,000. 

2. Self-driving cars would be great news for terrorists, as those vehicles could be loaded with explosives and used as moving bombs.

3. As drivers become more accustomed to not driving, their proficiency and experience will diminish. Should they then need to drive under certain circumstances, there may be problems.

4. If the car crashes without a driver, whose fault is it: the software designer's or the vehicle owner's? Driverless systems will definitely trigger many debates about legal, ethical, and financial responsibility.

5. Human behaviour, such as heavy foot traffic, jaywalking, and hand signals, is difficult for a computer to understand. In situations where drivers need to deal with erratic human behaviour or communicate with one another, a driverless vehicle might fail.

6. Reading road signs is challenging for a robot. GPS and other technologies might not register obstacles like potholes, recent changes in road conditions, and newly posted signs.

7. The road system and infrastructure would likely need major upgrades for driverless vehicles to operate on them. Traffic and street lights, for instance, would likely all need altering.

8. Hackers getting into the vehicle's software and controlling or affecting its operation would be a major concern.

9. Truck drivers, taxi drivers, Uber/Lyft drivers, and delivery people will eventually lose their jobs as autonomous vehicles take over.

Friday, 1 September 2017

SMART GLASS TECHNOLOGY





INTRODUCTION

Smart glasses are computing devices worn in front of the eyes. Their displays move with the user's head, so the wearer sees the display regardless of his or her position and orientation. Smart glasses or lenses are therefore the only devices that can alter or enhance the wearer's vision no matter where he or she is physically located or where he or she looks. There are three different paradigms for how the visual information a wearer perceives can be altered.


1. Virtual reality: The goal is to create a fully virtual world for the user to see, interact with and immerse into. The user sees this virtual world only; no other light sources affect the eye. One significant difference from a simple screen is that the actions of the user affect the virtual world: for example, movement changes what virtual content the user sees. A famous fictional example of a device creating a virtual world is the Holodeck from Star Trek. 

2. Augmented reality: The world is enhanced, or augmented, by virtual objects. The user can see the real world but also perceives virtual content created by a computing device and displayed by an additional light source that does not block perception of the real world. Interaction with those virtual objects is a way of communicating with the computing device.

3. Diminished reality: Objects are subtracted from scenes by filtering the light reflected or emitted by those objects towards the eye. This is most often used in combination with augmented reality to replace the diminished objects with virtual ones.

                                           Fig. Reality is augmented with a virtual object.



Devices with one display

1.Google Glass

One example of smart glasses with one display is Google Glass, which runs the Android operating system. Its specifications are the following:

1. Weight: 50g.

2. Processing: 1.2 GHz dual-core ARM Cortex-A9 CPU, PowerVR SGX540 GPU, 16 GB storage, 682 MB RAM. That's roughly equivalent to the hardware of an iPhone 4.

3. Camera: 5MP still (2528x1856 pixels) or 720p video. There is no flash.

4. Display: It is a color prism projector with a resolution of 640x360 pixels.

5. Sensors: microphone, accelerometer, gyroscope and compass.

6. Interaction: There is a long and narrow touchpad which supports swipe and tap gestures. The camera can be triggered by a button.

7. Audio: There is a bone conduction transducer for audio. Sound reaches the inner ear in the form of vibrations on the skull. Note that this technology is audible to many hearing-impaired people as well as persons with normal hearing.

8. Communication: It has no cellular modem, which means it cannot make phone calls on its own, but it does have Bluetooth and WLAN 802.11b/g. Google Glass is supposed to be used in combination with a smartphone, and one of its main uses is to display notifications in a convenient and quick way. It is expected to be priced similarly to a high-end smartphone, but there are no official announcements concerning the exact price or release date.
                                            Fig. Google Glass developer version.

2. Recon MOD

There are also many devices designed for use during sports. Similar to the Brückner TRAVIS, they need to function in a rough environment but should not be heavy. One example of dedicated sports smart glasses is the Recon MOD, a pair of snow sports smart glasses. They can operate at temperatures from -20 to 30 °C, weigh approximately 65 g and are water resistant. Interaction is done through a wrist remote. The main use of the Recon MOD is displaying maps and performance statistics.
                                                    Fig. Recon MOD.


Brückner TRAVIS 

It is visible in the figure that Google Glass does not have a very sturdy design and that it is made for consumers, not for rough environments such as industrial sites or factories. One example of industrial smart glasses is the Brückner TRAVIS, shown in the figure below. This device is a lot heavier than Google Glass because the processing is done in an embedded PC worn in a vest. It is controlled with six hardware buttons, and its main applications are streaming video and displaying manuals to employees.

                                                     Fig. Brückner TRAVIS.

Devices with two displays:-

Smart glasses with two displays can affect everything the wearer sees and can display three-dimensional content. This makes it possible to create a virtual, augmented or diminished reality. Both systems with two displays presented in this section need to be connected by cable to a PC, on which the virtual objects are created. In the future, similar devices could be wireless and worn outdoors.

1.Cast AR

An exciting new technology used to create an augmented indoor reality is Cast AR. It has a projector above each eye, each projecting onto a retro reflector at 120 Hz to create a 3D image. A retro reflector is a surface that reflects light back to its source with a minimum of scattering.

Nevertheless, some of the light from each projector will reach the eye it is not destined for. To deal with this, Cast AR has active shutter lenses. The projectors are active in disjoint small time intervals; while the projector above one eye is inactive, the active shutter lens of that eye stops any light from reaching it. This happens at such a high speed that the human eye cannot notice. The result is a stereoscopic 3D image. Cast AR tracks head movement and orientation using an infrared camera and infrared LEDs inside the retro reflector. The exact position is calculated by triangulation in hardware on the glasses, which makes it possible to adjust the orientation of the virtual objects with only a few milliseconds' delay after head movement. Many people can share one retro reflector, each seeing a different scene or the same scene from a different angle. Another advantage of Cast AR compared to other smart glasses is that the eye focuses on items in the distance rather than on a screen directly in front of the eyes. This makes it possible to use Cast AR for long periods without eye strain.

                                                        Fig. Cast AR.

2. Oculus Rift

The Oculus Rift is a virtual reality headset which uses two displays placed behind lenses close to the eyes of the wearer. There is one display in front of each eye; together they have a resolution of 1920x1080 pixels on the newer "Crystal Cove" prototypes. For the Oculus Rift it is very simple to create 3D scenes because each eye has its own display, which may be adjusted. The Oculus Rift tracks head movement using infrared LEDs like Cast AR, but it also relies on a gyroscope and accelerometer. The advantage of tracking with a gyroscope and accelerometer is very low latency; the disadvantage compared to the infrared solution is that errors accumulate over time, causing orientation drift. By combining both methods, the Oculus Rift implements precise, low-latency head tracking. As already mentioned, the Oculus Rift is used to create a virtual reality: no light from the environment reaches the eye. The advantage is that there is no need for any display surface in the room, and the whole field of vision can be occupied by the virtual scene.

      Fig. Oculus Rift Crystal Cove prototype.


APPLICATION:-

It is used in different fields such as:

1. Education.
2. Medical.
3. Entertainment.
4. Universal remote control.
5. Documentation.
6. Productivity.
7. Sport.
8. Commerce.

Note:- Source Internet.










Sunday, 27 August 2017

Seven Futuristic Car Designs That Will Blow Your Mind


Fifty years ago, any modern-day car would have appeared futuristic to those around at the time. Thanks to quickly evolving automobile technology, though, it's become increasingly rare that any vehicle design surprises people. Every so often, however, a concept vehicle comes along that's so futuristic in nature that it demands everyone's attention. In the past decade, everyone from Audi to Mercedes-Benz has taken a stab at creating one of these vehicles.


1. Audi Shark.

         In 2008, Milan's Domus Academy and Audi co-sponsored a concept vehicle competition, and it was Kazim Doku's entry, the Audi Shark, which flew away with top honors. The vehicle's shape really does resemble a shark gliding effortlessly through the water, and since it's more of a hovercraft than an actual car, it really would move effortlessly. The vehicle seats two, and although it has several motorcycle-like features, including a driving position similar to that used on a two-wheeler, the Shark has the outer shell and protection that has consistently made cars safer to drive than motorcycles. Add this to the comfortable seats and great-looking LED lights, and the Shark is undoubtedly the car of the future.



 2. Audi Airomorph.

               The Audi Airomorph could very well be the most futuristic-looking vehicle on this entire list. It was designed by Eric Kim, a recent graduate of the Art Center College of Design.
By stretching an expansion-resistant silver material over the vehicle's frame, Kim created a design that could be adjusted to fine-tune the Airomorph's aerodynamics while lapping a course. This is undoubtedly why it's considered the race car of the future.




3. Mercedes-Benz BIOME.

             Those who think hybrid or electric cars are environmentally friendly will have their entire world shaken when they eventually see the Mercedes-Benz BIOME. It will weigh less than 900 lbs, and this is thanks to the lightweight plant material that would be used to create it. That's right: plant material. Instead of being produced, the BIOME is grown in a lab.



4. SCARAB by David Gonçalves.

The SCARAB is yet another motorcycle-like vehicle that comes with the added protection of an enclosed cab. It also has a luggage storage area, and this combined with the ability to park vertically make it especially handy when traveling. Its three wheels, 4WD design and ability to change both shape and elevation make it exceptionally easy to maneuver.
Many view the SCARAB as an alternative and healthy method of urban transport, and it seems to easily live up to that expectation.



5. Mercedes-Benz F 015.

Most people have seen, or at least heard of, Google's driverless cars. Not to be outdone, Mercedes-Benz is jumping into the market with the F 015. It offers the same driverless technology, but the vehicle's passengers get to enjoy luxury during the ride.
The interior is like a lounge, with four chairs accessible once the stylish French doors are open. Display screens in the interior allow passengers to interact with the vehicle, and the F 015 can even be controlled remotely, making it the undeniable luxury sedan of the future.



6. MOY concept car.

Everyone has seen vehicles covered in advertising wrap driving around town promoting a company. Doing this usually requires a custom paint job or vehicle wrap, and periodically changing the design is typically cost-prohibitive. The MOY Concept Car, however, has changed this forever. By integrating LCDs and LEDs, along with electrochromic foil and liquid crystals, the vehicle can display custom graphics, and even videos, for thousands of potential customers to see.




7. Peugeot Metromorph.

Just imagine futuristic cities with vehicles leaving the roadways and traveling up the sides of buildings. If you can visualize this in your future, you may have a small idea of what the Peugeot Metromorph is like. This futuristic vehicle is designed for the city of the future: a city that has vertical roads integrated into its most important buildings.
The Metromorph could travel up these buildings and even park on their exteriors. From there, the cars can double as balconies, saving untold amounts of space when it comes to parking. Designer Roman Mistiuk might have unwittingly decided how future cities will be constructed and designed.


Note:- Source Internet.
