Nvidia Jetson Xavier and Isaac aim to launch a new era of autonomous machines

Nvidia has been a big proponent of GPU computing to power AI and driverless cars, but only now is it introducing a robotics platform of its own.

Jensen Huang, President and CEO of Nvidia, enthusiastically introduced Jetson Xavier as the single biggest robotics project Nvidia has worked on in years. Billed as the brand’s first-ever computer designed specifically for robotics, it combines a Volta Tensor Core GPU, an eight-core ARM64 CPU, dual NVDLA deep learning accelerators, an image processor, a vision processor and a video processor.

Nvidia claims that Jetson Xavier packs more than nine billion transistors and is designed to deliver over 30 trillion operations per second. Huang said Nvidia’s new robotics computer is more powerful than an Nvidia Titan Xp and a $10,000 workstation, all while needing only 30 watts to operate.

And with a price of only $1,299 (about £970, AU$1,700) as part of a Jetson Xavier Devkit, it could help power the next generation of robotics well beyond autonomous cars. Early access for the Jetson Xavier Devkit will begin in August.

Beyond a single piece of hardware, Nvidia also announced a new Isaac platform to power the next generation of autonomous machines. Nvidia Isaac includes new hardware, software and a virtual-world robot simulator to introduce artificial intelligence capabilities to robots for manufacturing, logistics, agriculture, construction and many other industries.


How to watch Asus ROG phone live stream

Computex 2018 is kicking off and all eyes are on Asus as its Republic of Gamers (ROG) department is set to launch its own smartphone, alongside other new products such as gaming laptops. Here’s how you can watch a live stream of the press conference.

The conference is entitled ‘For Those Who Dare’.

When is the Asus ROG phone launch?

The Asus ROG press conference at Computex 2018 in Taipei will take place on 4 June at 6:30pm local time, which is 10:30am in the UK.

How to watch the Asus ROG phone launch

We have embedded the official live stream for the Asus ROG event at the top of this very page, so you can watch it here.

What to expect from the Asus ROG Computex 2018 launch

As mentioned already, the hyped device for this launch is a rumoured Asus ROG smartphone. Asus isn’t overly well-known for phones in the UK market but it has been making them for years. Check out the teaser below showing what looks like a phone attached to a gaming controller.

A gaming-focused phone would take on the recently launched Razer Phone, but that won’t be the only new product announced.

Asus has teased other devices including what looks like a very thin gaming laptop (below). We can expect multiple laptops and other products, perhaps routers and more.



Logitech announces its most affordable wireless gaming mouse yet

Logitech has announced a new wireless gaming mouse that’s designed to be portable and extremely affordable.

The Logitech G305 LightSpeed is the company’s least expensive wireless gaming mouse yet, priced at $59 (about £40, AU$80, AED 220). Despite its low price, however, this isn’t a basic gaming mouse. It packs the same LightSpeed wireless technology found in Logitech’s flagship gaming mouse, the G903, while also incorporating the low-energy Hero sensor found in the G603.

Logitech claims that with a single AA battery users will be able to get 250 hours of continuous gameplay out of the G305 while in ‘Performance’ mode, which activates the 1ms report rate. Switching the mouse to Endurance mode, with an 8ms report rate, will supposedly extend the battery life to up to nine months.

The Logitech G305’s Hero sensor is also rated for tracking speeds of up to 400 inches per second and sensitivity of up to 12,000 DPI. Lastly, this mouse is designed to be durable, compact and lightweight at only 99g, so it could be a great portable gaming mouse to bring to conventions and eSports events.

Logitech has proven time and again that wireless peripherals are the future of PC gaming, but the missing piece has been affordability – so the Logitech G305 may just be the breakthrough we’ve been waiting for.



Microsoft reportedly acquires GitHub, the world’s largest source code platform

Growing rumors that Microsoft will acquire collaborative coding startup GitHub have been further substantiated by a report from Bloomberg, which claims that the software giant has agreed to the acquisition.

Last valued at $2 billion (in 2015), GitHub is responsible for hosting the single largest collection of code in one place, acting as the repository for projects ranging in scope from small developer teams to the likes of Google.

GitHub has been hunting for a new CEO for the last nine months, after co-founder Chris Wanstrath announced his intention to step down. The full details of the acquisition are yet to be revealed, but Bloomberg suggests an official announcement is imminent.

The coding company was allegedly impressed by the active approach Microsoft’s current CEO, Satya Nadella, has taken with supporting developers and other coding initiatives since he took on the role in 2014.

Repercussions

So what does it mean for the world’s biggest crowd-sourced code repository to be bought by the world’s biggest software company? For one, it potentially gives Microsoft access to over 26 million software developers and their code.

While this may result in some more exciting and creative software being developed by Microsoft, the details on the degree of ownership and control it would have over the code stored on GitHub are yet to be made clear, and users are cautious. 

A Twitter poll from Bryan Lunduke showed that 68% of existing GitHub users would move to another service if the acquisition went ahead, and only 25% of Twitter users asked by Tom Warren of The Verge thought that it was good news.

How this story will pan out is yet to be determined, so stay tuned for the full announcement of the alleged acquisition and what the two companies have to say about its effect on their respective users.

[via Bloomberg]


Tomorrow’s Cities: How Barcelona shushed noise-makers with sensors


In the heart of the bustling city of Barcelona is a square that at first sight seems like an oasis of calm. The Plaza del Sol, as the name suggests, is a suntrap and the perfect place to while away a few hours.

The problem is that the square is just too popular: for many of the city’s young inhabitants it has become the number one venue to meet friends and hang out until the small hours.

One resident said it was like living in a permanent party.

Even the shops around the square reflect its reputation for late-night carousing, selling beer, pizza and little else.

The situation had become unbearable for those with apartments around the square, who have lived with unacceptable noise levels for the past 20 years.

Step in Barcelona’s fabrication laboratory, one of a network of 1,200 workshops around the world that allow people to test out new designs and ideas, and build products and new technology using a range of cutting-edge tools. Labs share their designs online so that something built in Boston can be replicated in a lab in Shenzhen.

With the help of some EU money, the lab built low-cost, easy-to-use sensors that can detect air pollution, noise levels, humidity and temperature.

“This was not only about being part of a scientific project but about enabling political action,” said Tomas Diez, who runs the lab.

Families placed the sensors on their balconies and were able to demonstrate that night-time noise levels – with peaks of 100 decibels – were far higher than World Health Organization recommendations.
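To make that comparison concrete, here is a minimal Python sketch of the kind of check such sensor data supports: log sound levels overnight and flag any readings above a night-time limit. The readings, times and the 40 dB limit below are illustrative assumptions, not figures from the Barcelona project.

```python
# A minimal sketch of the kind of analysis the balcony sensors enable: take a
# night's worth of sound-level readings, compare them against a night-time
# limit and report the exceedances. All values here are illustrative
# assumptions, not data or thresholds from the Plaza del Sol deployment.

NIGHT_LIMIT_DB = 40.0  # assumed limit; WHO night-noise guidance for Europe is around this level

# Hypothetical hourly peak levels (time, decibels) logged by one balcony sensor
readings = [
    ("22:00", 72.4),
    ("23:00", 88.1),
    ("01:00", 100.2),  # the kind of peak the residents recorded
    ("03:00", 95.7),
    ("06:00", 51.3),
]

def flag_exceedances(samples, limit=NIGHT_LIMIT_DB):
    """Return the (time, level) pairs that exceed the night-time limit."""
    return [(t, level) for t, level in samples if level > limit]

for time, level in flag_exceedances(readings):
    excess = level - NIGHT_LIMIT_DB
    print(f"{time}: {level:.1f} dB, {excess:.1f} dB above the assumed {NIGHT_LIMIT_DB:.0f} dB limit")
```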

Armed with this information, the residents went to the city council, pressing them to rethink the use of the plaza.

Police now move people on at 23:00. Rubbish lorries, which had previously cleared up when the partygoers left in the early hours, have been rescheduled for the morning, and steps that provided seating for gatherers have now been filled with plant boxes.

“Now the square is not just for people who want to party at night,” said Mr Diez.

His vision for fab labs goes further, imagining them as a vehicle to allow cities to become truly self-sufficient. They can provide citizens with the technology to grow their own food, 3D-print new products whenever they need them, and offer the tools needed to fight the growing problems of urbanisation.

He, along with other technologists, designers and architects, is backing a project known as Fab City – a collective of 18 cities from Europe, India, China and America – that aims to create more sustainable and productive cities around the world in the next 30 years.

The vision is partly idealistic, harking back to pre-Industrial Revolution days when people made their own clothes and goods and shopped locally.

But it is also about improving the environment, moving from a globalised world where goods are shipped from China in huge container ships to a place where people can pick up a blueprint from a fab lab and create products in their own city.

“We are trying to build a new productivity model for society, creating a new sustainable economy in cities where people can prototype and test ideas,” said Mr Diez.

The data collection in Barcelona was part of an EU project – Making Sense – that aims to empower citizens through “personal digital manufacturing”.

It is part of a wider attempt to rethink smart cities and put control back firmly in the hands of citizens.

“We wanted to end this top-down approach where cities go to companies and ask them to build infrastructure and then pretend that is a smart city,” said Mr Diez.

And along the way, he hopes to build a new type of digital economy, one in which citizens own and control their own data – what he calls “citizen-based infrastructure”.

Giving teenagers a say

It is not just fab labs that are hoping to use data to empower people.

Sensors in a Shoebox was a project set up earlier this year in Detroit, aiming to give local teenagers a say in urban planning.

The project provided compact sensor kits that allowed the children to collect a range of data from two different locations – one on the waterfront and the other in a local park.

“We took the view that we can’t have smart, connected cities without smart and connected young people,” said Elizabeth Moje, dean of the school of education at the University of Michigan, which headed up the project.

Air quality was important to the teens – a personal issue for many in a city where one in six residents lives with asthma.

The children also learned the limitations of data collection.

While they were able to measure the number of people who used a certain area, they also had to go out and see for themselves what type of person used the area, explained Prof Moje.

“If there were more elderly people, you might want more benches – if there were young children, more play areas,” she said.

“The youngsters learned that a piece of technology can’t answer every question.”

It was also important that the children saw that the data they collected and the observations they had made could have an impact on city planning.

Their recommendations were passed on to representatives from the mayor’s office, community groups and Citizen Detroit.

“We have to teach children to be critical thinkers, as well as how to be good and productive citizens. This enabled them to learn about their communities, even if they don’t go on to be engineers or social scientists,” said Prof Moje.

Dr Jennifer Gabrys leads the Citizen Sense project at Goldsmiths, University of London, which aims to research how effective citizen-led sensor projects are.

Giving people the tools to collect their own data can sometimes be a “perfunctory gesture to make smart city projects more palatable”, she said.

“Cities increasingly have so many sensors generating so much data, some of which people have access to and some of which they don’t.

“People may not want everything monitored and who decides and shapes this agenda is a big question for future urban democracy.”


macOS 10.14 appears to be getting an upgraded dark mode and an Apple News app

Apple’s Worldwide Developers Conference (WWDC) for 2018 gets underway tomorrow, and that usually means a plethora of updates on software for iPhones, Macs, Apple TVs, and all the other gear Apple makes. It looks like one new macOS feature just leaked out early.

Developer Steve Troughton-Smith spotted that one of the 30-second previews prepared by Apple shows a dark mode for Xcode 10, one of the programming tools available for macOS. The darker Trash icon in the dock suggests that this could be a system-wide feature, though we’re going to have to wait until tomorrow to find out for sure.

macOS already features an optional dark mode, but it only applies to the dock and menu bar rather than program interfaces. If Xcode 10 has been given a new dark gray look then it’s possible that other Apple applications are going to follow suit.

(Tweet image: a dark macOS dock. Credit: @stroughtonsmith)

Also in the dock on the video preview: an Apple News app. That suggests Apple is going to extend Apple News from iOS to macOS, so you’ll be able to keep up with news stories on the topics you’re interested in from your laptop or desktop machine as well.

We’ve heard a ton of other rumors about what to expect from the software updates being announced tomorrow – iOS 12, for instance, is said to be getting a phone-to-phone AR capability that lets you enjoy augmented reality with your friends, as well as a digital health check-up tool that makes sure you’re not using your phone more than you should be.

What we won’t get, according to the most recent speculation anyway, is refreshes for Apple’s various hardware lines. The main keynote gets underway at 10am PT, 1pm ET, 6pm BST, and 3am AEST on Tuesday if you’re in Australia.


Are you scared yet? Meet Norman, the psychopathic AI

Norman is an algorithm trained to understand pictures but, like its namesake, Hitchcock’s Norman Bates, it does not have an optimistic view of the world.

When a “normal” algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: “A group of birds sitting on top of a tree branch.”

Norman sees a man being electrocuted.

And where “normal” AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from “the dark corners of the net” would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

These abstract images are traditionally used by psychologists to help assess the state of a patient’s mind, in particular whether they perceive the world in a negative or positive light.

Norman’s view was unremittingly bleak – it saw dead bodies, blood and destruction in every image.

Alongside Norman, another AI was trained on more normal images of cats, birds and people.

It saw far more cheerful images in the same abstract blots.

The fact that Norman’s responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT’s Media Lab which developed Norman.

“Data matters more than the algorithm.

“It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves.”

Artificial intelligence is all around us these days – Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has made algorithms that can teach themselves to play complex games.

And AI is already being deployed across a wide variety of industries, from personal digital assistants, email filtering, search and fraud prevention to voice and facial recognition and content classification.

It can generate news, create new levels in video games, act as a customer service agent, analyse financial and medical reports and offer insights into how data centres can save energy.

But if the experiment with Norman proves anything it is that AI trained on bad data can itself turn bad.

Racist AI

Norman is biased towards death and destruction because that is all it knows, and AI in real-life situations can be equally biased if it is trained on flawed data.

In May last year, a report claimed that an AI-powered computer program used by a US court for risk assessment was biased against black prisoners.

The program flagged that black people were twice as likely as white people to reoffend, as a result of the flawed information that it was learning from.

Predictive policing algorithms used in the US were also spotted as being similarly biased, as a result of the historical crime data on which they were trained.

Sometimes the data that AI “learns” from comes from humans intent on mischief-making, so when Microsoft’s chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls, who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.

Norman, it seems, is not alone when it comes to easily suggestible AI.

And AI hasn’t stopped at racism.

One study showed that software trained on Google News became sexist as a result of the data it was learning from. When asked to complete the statement, “Man is to computer programmer as woman is to X”, the software replied “homemaker”.
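For readers curious how such an analogy probe works in practice, here is a minimal sketch using gensim’s pretrained Google News word2vec vectors; the model and the token names are assumptions standing in for the unnamed software in the study.

```python
# Minimal sketch of the analogy probe described above, using gensim's
# pretrained Google News word2vec vectors as an assumed stand-in for the
# (unnamed) model in the study.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large one-off download

# "Man is to computer programmer as woman is to X":
# find words closest to (computer_programmer - man + woman)
candidates = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=5,
)
for word, score in candidates:
    print(f"{word}\t{score:.3f}")

# Embeddings trained on news text have been shown to rank gendered occupation
# words such as "homemaker" highly for this probe, reflecting bias in the data.
```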

Dr Joanna Bryson, from the University of Bath’s department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by “white, single guys from California” and can be addressed, at least partially, by diversifying the workforce.

She told the BBC it should come as no surprise that machines are picking up the opinions of the people who are training them.

“When we train machines by choosing our culture, we necessarily transfer our own biases,” she said.

“There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities.”

What she worries about is the idea that some programmers would deliberately choose to hard-bake badness or bias into machines.

To stop this, the process of creating AI needs more oversight and greater transparency, she thinks.

Prof Rahwan said his experiment with Norman proved that “engineers have to find a way of balancing data in some way”, but he acknowledged that the ever-expanding and important world of machine learning cannot be left to the programmers alone.

“There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour,” he said.

This new era of “AI psychology” would take the form of regular audits of the systems being developed, rather like those that exist in the banking world already, he said.

Microsoft’s chief envisioning officer Dave Coplin thinks Norman is a great way to start an important conversation with the public and businesses who are coming to rely on AI more and more.

It must start, he said, with “a basic understanding of how these things work”.

“We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right,” he said.

“When I see an answer from an algorithm, I need to know who made that algorithm,” he added.

“For example, if I use a tea-making algorithm made in North America then I know I am going to get a splash of milk in some lukewarm water.”

From bad tea to dark thoughts about pictures, AI still has a lot to learn but Mr Coplin remains hopeful that, as algorithms become embedded in everything we do, humans will get better at spotting and eliminating bias in the data that feeds them.
