Cleaning up: Britain goes a whole week without coal-generated electricity

For the first time since the late 19th century, Britain has gone for an entire week without using coal-generated electricity. 

The National Grid Electricity System Operator (ESO), which runs the network in England, Scotland and Wales, confirmed on May 8 that Great Britain had not used any coal to generate electricity for a full seven days – a landmark 168 hours without coal-powered electricity. There have been over 1,000 hours of coal-free generation so far in 2019.

The UK government has already committed to dramatically reducing coal-generated power. The long-term aim is to completely phase out the use of coal by 2025, and for the UK to be the first major economy to pass legislation for net-zero emissions. 

National Grid ESO says it believes it will be able to operate the British electricity system on a zero-carbon basis by 2025. The UK has been gradually moving towards greater use of renewable energy sources, but coal-powered stations still exist as a backup.

The new normal

National Grid ESO says that we should expect periods without coal usage to become increasingly common in the future. In a statement sent to TechRadar, company director Fintan Slye said: “While this is the first time this has happened, I predict it will become the ‘new normal’.” This is something that is welcomed by environmental groups and industry bodies.

Slye explained that new technology has been key to making progress. “Zero carbon operation of the electricity system by 2025 means a fundamental change to how our system was designed to operate – integrating newer technologies right across the system – from large scale off-shore wind to domestic scale solar panels to increased demand side participation, using new smart digital systems to manage and control the system in real-time.”

He added: “To help us reach today’s significant milestone, we have been working with industry over the last few years to ensure the services we require to operate the network are not dependent on coal.

“We have been forecasting the closure of coal plant and reduced running for some time – due to us having to manage more renewables on the system. Transmission owners have invested in their networks accordingly and we have refined our operational strategies and real time operation of the network to ensure continued secure and economic operation.”

The race to net-zero

RenewableUK is the trade association for the renewables industry, promoting the adoption of clean energy systems. In a statement to TechRadar, deputy chief executive Emma Pinchbeck said that wind and other renewable energy sources are playing an important role in reducing carbon emissions: “Wind has become a mainstream power source for the UK, providing up to 35% of our electricity over the weekend.

“Renewables overall are playing a leading role in our energy mix – and have been crucial to phasing out dirty coal. The coal phase out is just the beginning of a move away from fossil fuels to low carbon living, to avoid the enormous risks of climate disruption. Last week, the Committee on Climate Change said we can only achieve net-zero emissions with a massive increase in renewables.

“Government has been told to act now to build on the coal phase out, investing in our world-leading renewable industry and the jobs it brings, including technologies that are absent from Government policy, from innovative wave and tidal to cheap onshore wind.”

But while the shift to renewables has helped phase out coal, there has also been an increase in the use of gas-generated electricity. That remains a concern: the 2008 Climate Change Act requires an 80% reduction in emissions from 1990 levels by 2050.


Intel roadmap confirms 10nm ‘Tiger Lake’ chip with Xe graphics, more Ice Lake and Lakefield details

Intel extended its public microprocessor roadmap through 2020 on Wednesday, confirming the existence of “Tiger Lake,” a 10nm Core chip due in 2020 that features an entirely new microarchitecture and Intel’s forthcoming Xe graphics.

Executives also began disclosing some of the performance improvements associated with its previously-announced chips, such as how fast Intel’s first 10nm chip, Ice Lake, will be compared with the previous generation. Intel also began talking a bit about the improvements in “Lakefield,” which stacks logic together to create a denser system-on-a-chip.

Combine the new 10nm “Ice Lake” core—which executives said would ship in June—with the redesigned Tiger Lake chip and Intel’s other major announcement, 7nm chips by 2021, and Intel is at least talking more aggressively than it has in years. 

Not quite a formal roadmap, but close. (Image credit: Intel)

Ice Lake 

Speaking at Intel’s investor conference Wednesday, Murthy Renduchintala, Intel’s chief engineering officer, said that it’s no secret that Intel has struggled with 10nm development.

Intel’s Ice Lake isn’t that far away. (Image credit: Intel)

“In discussions with many of you, the belief is that Intel’s process technology has slowed down over time,” Renduchintala added. Wednesday’s message? That’s no longer the case.

Ice Lake, Renduchintala said, takes full advantage of the 10nm technology. Though he didn’t disclose performance, he did provide some generation-over-generation comparisons, albeit with no real specifics. It’s interesting that Intel’s not talking directly about CPU integer performance; instead, Intel believes that Ice Lake will deliver 2.5 times to 3 times the “AI performance” of a prior-generation chip, and twice the graphics performance.

More releases predicated upon Ice Lake are coming, too. (Image credit: Intel)

Ice Lake also contains what Intel refers to as its “Generation 11” graphics, which apparently will be branded as a “Next Gen Graphics Iris Plus Experience,” if the boilerplate text in Intel’s presentation is any indication. Gregory Bryant, the senior vice president and general manager of the Client Computing Group, told investors that the integrated graphics are powerful enough to play hundreds of games at 1080p resolution at 30 (not 60) frames per second.

Tiger Lake

According to Renduchintala, the lead product for the 7nm generation will actually be a GP-GPU for the datacenter in 2021, based upon the new Xe architecture that Intel is developing. PC users, however, will be focusing on Tiger Lake.

A single student has solved the problem of cheap, non-invasive glucose testing

US student Bryan Chiang appears to have done what several decades of big pharma investment, the billions in the Google innovation fund, Apple, and numerous failed medical start-ups haven’t managed: he’s cracked the impossible problem of checking a person’s blood glucose level without having to smear some actual blood on a sensor or poke a wire into the skin.

While there are some modern tech solutions to blood tests out there today in the form of continuous glucose monitors (CGMs), the costs start in the hundreds of pounds per month and they all require attachment to the body and access to its blood and fluids in some way to get a result. A smartphone option that offers live glucose readings for a fraction of the cost – and no tubes, blood, patches or disposables – would be a genuine game changer for millions of people.

This non-invasive dream has remained elusive because of the difficulty of detecting blood glucose through the barrier of the skin. Numerous schemes have promised bloodless, sci-fi test results before disappearing without trace – or at least going quiet about their plans since 2014, like the promising but presumably still fictional GlucoWise, to pick one example of a promised tech solution that has so far failed to materialize. But Chiang and his phone seem to have done it.

Hence Microsoft has handed its $100,000 (about £77,000, AU$144,000) 2019 Imagine Cup innovation award to Chiang, whose non-invasive EasyGlucose testing method appears to both exist and provably work.

It uses a custom lens attached to a smartphone to generate a high-res image of the eye for analysis, then – via the mysterious modern cure-alls of deep learning, neural networks and Chiang’s own algorithm – accurately associates minor changes in the ridges and features of the human iris with the glucose level in the bloodstream.
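
The article doesn’t describe how Chiang’s model is built, so the snippet below is only a guess at the general shape of such a pipeline – a small convolutional network regressing a glucose value from an iris photo. The class name (IrisGlucoseNet), the layer sizes and the input resolution are all invented for illustration; this is not EasyGlucose’s actual code.

```python
# A minimal, hypothetical sketch of the general approach described above:
# a small convolutional network that regresses a blood-glucose value (mg/dL)
# from a close-up iris photo. This is NOT EasyGlucose's actual model; the
# layers, sizes and names are placeholders for illustration only.
import torch
import torch.nn as nn

class IrisGlucoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor over the iris image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head mapping pooled features to a single glucose estimate.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = IrisGlucoseNet()
dummy_iris = torch.rand(1, 3, 224, 224)   # stand-in for a lens-captured iris photo
estimate = model(dummy_iris).item()       # untrained, so the number is meaningless
print(f"estimated glucose: {estimate:.1f} mg/dL (illustrative only)")
```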


EasyGlucose measures minor changes in the iris and correlates them with blood glucose levels (Image credit: Imagine Cup)

The crucial thing now is to match the claimed results to the hype. We saw Google literally pretend that it was possible to read glucose levels in tears a few years ago, but the smart contact lens project was quietly binned recently when medical development partners discovered that, er, it didn’t work at all. In fact, there’s little to no correlation between eye-fluid glucose and blood glucose levels, making it useless for tracking the rapid changes seen in the bodies of insulin injectors.

Why is glucose testing so important?

Regularly checking your blood glucose level is a critical part of managing diabetes, particularly the Type 1 variant, which requires constant monitoring and insulin injections to keep glucose levels in check. Go too low and you risk anything from wobbly hypos to sudden unconsciousness and fits; stay too high for too long and the reward is potential damage to the body and long-term complications. That’s why more blood testing, more often, is advised to stay on top of these fluctuations, although when it involves making yourself bleed five or six times a day it’s not the most fun thing to do. 

That’s why everyone with diabetes in their lives wants a non-invasive option; something that makes seeing what your blood sugar is doing as easy as checking the time. Microsoft says that EasyGlucose’s results at this early stage are within seven per cent of industry error norms, with 100 per cent of results falling within the ‘clinically accurate’ frame. 

If that is indeed true and replicable even by clipping the $10 (about £7.50, AU$14.50) lens attachment on the five-year-old Huawei your mother’s hoping to nurse through until at least 2022, it’ll be a rare, genuine case of some kid and a smartphone trapping, taming, and triumphantly riding home on one of the medical world’s longest-running unicorns. 


AI’s memory is perfect for insight into collective behaviour

AI was never intended to give insights into collective behaviour, yet it’s becoming an increasingly efficient method of doing so. 

In an age of GDPR anxiety, collective behaviour is the way forward for understanding consumer preferences, and AI’s memory of data allows this to happen without jeopardising individual privacy. 

Early beginnings

Alan Turing was recently named the most ‘iconic’ figure of the 20th century. Perhaps this is because of the explosive interest in artificial intelligence and the influence it is set to have on our world in the near future. 

He was a mathematician who cracked codes during World War II and is credited with shortening the war by several years through his work at Bletchley Park. There, he was tasked with breaking the ‘Enigma’ code and, with another code-breaker, invented a machine known as the Bombe, which has had a huge influence on the development of computer science and artificial intelligence.

Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so machines should, in theory, be able to do the same. This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discusses how to build intelligent machines and how to test their intelligence.

After a 1956 conference at which what many consider to be the first AI programme was presented, a flurry of interest in AI ensued. Computers could store more information and became faster, cheaper and more accessible. Machine learning algorithms improved, and people got better at knowing which algorithm to apply to their problem. However, a mountain of obstacles was uncovered and things began to slow down. 

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost in funding. This eventually led to some of AI’s greatest achievements, such as the defeat of reigning world chess champion and grandmaster Garry Kasparov.

(Image credit: Geralt / Pixabay)

AI’s memory

As the history of AI shows, it was created out of a need to solve problems faster than humans can. Yet a by-product of solving these problems is a wealth of data, which is effectively the AI’s ‘memory’. What the AI has collected becomes a database of insight into people’s behaviour.

For example, one area in which AI has been particularly beneficial is matching people with one another based on shared interests. Technology has long been thought of as a way to bring people together, but in the past those people have at least known one another. Now, AI can bring together people who are likely to be friends but may never have met – which is precisely the aim.

An example of AI bringing people together can be seen at Badi. We utilise AI’s capability in this area in response to the rise of room rentals in which people who don’t know each other live together. Previous room-rental business models have concentrated on the renter and the potential flat, but happiness in flatshares is often determined by housemates rather than the standard of the flat.

Levels of insight

To do this, three levels of insight are collected, combined and used to facilitate better housemate ‘matches’ – a rough sketch of how these signals might be blended follows the list below. This is AI fulfilling its role of doing something much faster than a human could, particularly as it can analyse data and ‘read’ people’s profiles at speed.

1. Personality traits – these are based on a set of questions a person is asked, covering their age, gender, occupation, where they live, where they want to live and their preferences in a flatmate.

2. Behavioural data – this analyses the actions people take on Badi, such as who a person has sent requests to and how many requests they have sent. This produces an inferred flatmate preference, which is cross-analysed with the person’s questionnaire answers to give a truer picture of what their preferences really are. For example, a person may think they would prefer to live with a man who works, but their requests may show they would actually prefer to live with a woman who works.

3. Images – the images and descriptions a person uploads are analysed to infer their interests, which are then used to match them with people who have similar interests.
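
As a loose illustration only – Badi’s production matching system isn’t public – the three levels described above might be blended into a single compatibility score along the lines below. The profile fields, the ‘gender’ and ‘occupation’ keys and the weights are all invented for the sketch.

```python
# Hypothetical sketch of combining the three insight levels into one match score.
# The fields, keys and weights are illustrative inventions, not Badi's real model.
from dataclasses import dataclass, field

@dataclass
class SeekerProfile:
    stated_prefs: dict       # level 1: questionnaire answers (e.g. {"gender": "woman"})
    request_history: list    # level 2: traits of people they actually sent requests to
    inferred_interests: set = field(default_factory=set)  # level 3: from images/descriptions

def match_score(seeker: SeekerProfile, candidate_traits: dict,
                candidate_interests: set) -> float:
    """Blend stated preferences, revealed behaviour and image-derived interests."""
    # Level 1: how well the candidate fits what the seeker *says* they want.
    stated = sum(candidate_traits.get(k) == v for k, v in seeker.stated_prefs.items())
    stated /= max(len(seeker.stated_prefs), 1)

    # Level 2: how closely the candidate resembles people the seeker actually contacted.
    similar = sum(
        all(past.get(k) == candidate_traits.get(k) for k in ("gender", "occupation"))
        for past in seeker.request_history
    )
    revealed = similar / max(len(seeker.request_history), 1)

    # Level 3: overlap of interests inferred from images and descriptions.
    shared = len(seeker.inferred_interests & candidate_interests)
    interests = shared / max(len(seeker.inferred_interests | candidate_interests), 1)

    # Revealed behaviour is weighted above stated preferences, reflecting the point
    # above that requests often contradict questionnaire answers (weights invented).
    return 0.3 * stated + 0.5 * revealed + 0.2 * interests

# Example with made-up data: the seeker *says* they want a working man, but has
# only ever sent requests to working women, so the second candidate scores higher.
seeker = SeekerProfile(
    stated_prefs={"gender": "man", "occupation": "working"},
    request_history=[{"gender": "woman", "occupation": "working"}],
    inferred_interests={"cooking", "climbing"},
)
print(match_score(seeker, {"gender": "man", "occupation": "working"}, {"films"}))
print(match_score(seeker, {"gender": "woman", "occupation": "working"}, {"cooking"}))
```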

While the aim of the AI isn’t to produce data on people’s preferences when seeking a housemate, the nature of the technology means that, because all answers are stored in the cloud, the mass of individual responses gives insight into collective behaviour. 

(Image credit: Shutterstock)

Collecting data from AI

Ironically, this data from AI is what humans should find most important. When Garry Kasparov lost to AI in a chess match, the advantage he gained for the future was the ability to analyse how the AI beat him. The AI holds a store of all past games and knows the patterns for each eventuality. By studying how it played, Kasparov could learn where his own weaknesses lay and pick up techniques from other players, allowing him to improve for future competitions.

Equally, when matching housemates, an understanding of how and why people are matched allows Badi to further tweak the AI and target specific people it knows will be a good match for one another. 

While people may worry about ‘big data’ and the threats it can pose, the current debate around GDPR highlights the issue of individualised information. AI doesn’t focus on individuals when providing information on collective behaviour. Instead it allows individuals to stay anonymous while revealing emerging trends and behaviours that are hugely beneficial for companies wanting to understand their customers better. The example from Badi shows that companies don’t need to look at personal information; they can instead use aggregate statistics to improve user journeys. 

While AI was never intended to provide data on collective behaviour, the way it works means it has an unbeatable memory that can combine information. Companies should begin to utilise this to understand collective trends without compromising individuals’ specific data.

Guillem Pons, Chief Data Officer at Badi 


Intel says Ice Lake is on target, and even faster 7nm CPUs will arrive by 2021

Intel has clarified that its long-delayed next-gen processors built on a 10nm process will start to ship in June, and that the chip giant has learned from its mistakes and will be able to push forward more quickly to 7nm CPUs, promising even better performance come 2021.

This, and a ton of other roadmap and product info, was divulged by Intel at its 2019 investor meeting.

Intel claimed its 7nm products will debut in 2021 starting with Intel Xe, the next-gen graphics solution we’ve been hearing so much about lately, although in this case, the first offering will be a General-Purpose GPU for heavyweight usage in the data center. That will be followed by a 7nm Xeon (server processor).

A consumer-targeted Intel Xe graphics card is expected in 2020, the year before the aforementioned GP-GPU, although that won’t be 7nm – but rather 10nm.

Speaking of 10nm, as we said at the outset, the first Ice Lake offerings will start shipping in June, and Intel’s 10nm ramp predicts that client systems will be on sale for the holiday season at the end of 2019. This is in line with what we’ve previously heard, namely that consumer laptops using Ice Lake processors will be available at the very end of the year. Server offerings using the CPUs will follow in the first half of 2020.

Furthermore, 10nm Tiger Lake processors will follow Ice Lake in 2020, arriving in laptops and packing Intel’s Xe graphics (rather than Ice Lake’s Gen11 integrated graphics), which will be capable of running multiple 4K displays (or driving an 8K monitor).

Pedal to the silicon

As Tom’s Hardware reports, chief executive Bob Swan noted that this forecast swift move from 10nm to 7nm reflects an acceleration in how quickly the company brings new process nodes into production.

In other words, Intel has learned a lot of lessons from the difficulties in transitioning from current 14nm chips to these incoming 10nm efforts, with the insights and gains therein meaning swifter progress when dropping down to 7nm (and subsequently refining). Or at least that’s the theory.

Intel further noted that Ice Lake (10nm) will effectively double integrated graphics performance levels compared to Coffee Lake CPUs – not surprising as it sports the aforementioned Gen11 graphics we’ve been hearing good things about – as well as doubling video encoding performance, and tripling wireless speeds.

As for 7nm, Intel is quoting impressive further steps in power and efficiency: specifically, a 15% improvement in transistor performance and a bigger 20% boost in performance-per-watt.

Finally, it’s worth noting that Intel said stock shortages of current 14nm processors would continue to ease going forward, and would be fully resolved by the fourth quarter. The last we heard, this was expected to happen in the third quarter, but at any rate, Intel anticipates that inventory shortages will be a thing of the past before 2019 is out.

That’s good news certainly, as consumers and PC makers alike have been struggling to get hold of Intel’s mid-range and particularly lower-end processors (the chip giant has been concentrating its limited production capacity on higher-margin, high-end products).


How Intel’s Project Athena will shape the future of laptops

Intel is envisioning a bold new future for laptops with its Project Athena initiative, which the company first announced at CES 2019.

Project Athena will see Intel offer guidance to equipment and PC component manufacturers to create future laptops that are smarter, faster (thanks to 5G) and more power efficient – which will hopefully mean much longer battery lives.

This week, the wraps came off the plan, as Intel gathered with hundreds of members of the PC manufacturing industry in Taiwan. This is where one of Intel’s Project Athena Open Labs is located – specifically in Taipei – with another in Folsom, California.

How Project Athena will work

Intel’s Project Athena Open Labs will work as a resource for equipment and component manufacturers to test their products. Intel has brought engineers with system-on-chip (SoC) and power optimization expertise to these Open Labs, and they’ll conduct testing and offer guidance to manufacturers on how to improve their products.

The goal is to ensure a wide range of Project Athena-compliant components that original equipment manufacturers (OEMs) can depend on to work in more power-efficient laptops. Component manufacturers will submit their products to the Project Athena Open Labs to see if they meet the specifications. These components can include everything from displays and audio devices to haptic feedback motors, SSDs and wireless modules.

By covering the whole range of components, the program should allow OEMs to easily choose components for devices that follow Project Athena specifications and more or less guarantee better power efficiency.

Project Athena is aimed toward 2020 and beyond, with future devices featuring 5G connectivity and AI in mind. But, we won’t have to wait until 2020 to get the first laptops offering Project Athena specifications.

The first batch of Project Athena devices is slated for the second half of this year. And, while Project Athena is focused on the components rather than complete computers, Intel hasn’t said which chips will be at the heart of the experience. Intel’s Ice Lake processors could be coming at the end of the year, and with their 10nm architecture they may be ideal candidates for more power-efficient laptops.


Virtual reality: the safest bet business can make today

The VR and AR market is one of the most revolutionary in tech, forecast to expand from $6.1 billion in 2016 to $192.7 billion by 2022. Of course, there is a hidden potential that only a few are realizing, and if the experts are right, it is going to completely reshape the consumer experience.
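
For a sense of scale, those figures imply a compound annual growth rate of roughly 78% a year over the six-year span; a quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the growth implied by the quoted forecast
# ($6.1bn in 2016 to $192.7bn by 2022); figures come from the paragraph above.
start_bn, end_bn, years = 6.1, 192.7, 2022 - 2016
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 78% per year
```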

The history of commerce has been a struggle to gain and maintain the attention, and therefore the purchasing power, of customers. Virtual reality (VR) and augmented reality (AR) offer new and increasingly attractive ways to win consumers’ attention and to transform the way they shop. 

While traditional, two-dimensional methods of reaching consumers are undoubtedly effective, we are looking to VR/AR to significantly shift the ways people shop, browse and take in information. 

VR, AR and personalization

Although retail has been in decline, augmented reality can allow companies to retain customers through a more personalized experience. Consumers increasingly expect businesses to anticipate and meet their needs – that’s why they are customers in the first place – and AR allows businesses to do this in ways that other technologies simply can’t. 

Take the real estate industry, for instance: when a consumer goes to buy an apartment, the builder presents them with a flashy 3D model that sits on a large table and provides next to no information about the apartment itself. Some builders even run an animated walkthrough of the apartments that doesn’t always look real.

But imagine a virtual tour instead. Just as gamers can walk a rail placed at unfathomable heights, a buyer can easily tour the home from the inside, seeing the decor, the placement of the furniture, the layout of the kitchen and much more – which greatly increases the builder’s chance of making a sale.

Similarly, imagine being able to virtually try on that gothic shirt you want, a new pair of Jordans or even funky eyeglasses. The potential is real and the applications are endless, provided businesses learn to let their customers connect with a product or service rather than wooing them with useless ads. As consumers develop along with technology, they will expect the places they shop to treat them as individuals, with unique shopping experiences and recommendations. 

Major retailers such as Target and Walmart have already been developing technology along these lines. Target unveiled an AR feature that allows customers to view 3D versions of desired products within their own home. Each product is scaled, allowing consumers to visualize exactly how their living room or kitchen would look with a Target product should they choose to buy it. 

(Image credit: Jirsak / Shutterstock)

Providing a more personalized experience 

Personalization in any business will not be successful if the brand doesn’t make use of modern technologies, and big and small companies alike have started to realize the potential. Augmented reality, for example, can provide clients with entertainment experiences that make them more engaged with a brand. 

While previous consumers may have been more interested in the intersection of price and quality, the new consumer increasingly leans towards the experience behind the product. And they expect that experience to be personalized. They want to feel like they share the same set of ideals with the company, not simply that the company produces a satisfying product. The new entertainment experience that consumers desire can most effectively be delivered with the use of developing technologies, such as AR. 

While VR offers amazing potential in the realm of customer experience, it is going to impact consumers’ lives in other significant ways too. What we are about to witness, and what’s already beginning to happen, is VR spreading its potential across an entirely new range of industries, including education, healthcare, sports, recreation and many more.

A series of studies performed by the American Society of Anesthesiologists showed that innovative new VR technologies are likely to reduce children’s pre- and post-surgical pain and anxiety. In addition, the New York Times has used VR to enhance its storytelling, allowing readers to view 360-degree videos alongside stories to create a deeper connection with readers. 

If a virtual reality-based experience is present in other walks of life, and delivering a high-quality experience in various realms, then consumers will naturally begin gravitating towards this experience when shopping as well. 

The stir the technological world is feeling is also shaking the retail industry on a significantly larger scale. All the major players have jumped on the bandwagon: Facebook took over Oculus, Samsung is putting effort into its own VR, Google has invested over $500 million in Magic Leap in a bid to win the race, and Microsoft recently showcased its HoloLens 2 at MWC 2019. But why so much investment in VR/AR? It’s pretty simple: this technology can increase the efficiency of a business, bring down overall marketing costs and provide useful data and customer analytics. It’s a win-win for both consumer and business.

Looking to the future

While some people are cynical about VR and its ability to make positive change, there is no doubt that VR/AR is transforming various industries in radical ways. Customer service is one area that will change as AR/VR technologies become increasingly accessible and widely used. As consumers grow to expect more from their shopping experience, new technologies will need to be developed to fill that need. Consumers will come to – and are already coming to – expect an experience based around their personal desires and ideals. Virtual reality is the technology that will meet the new consumer’s demands.

Rafael Szymanski, Founder and CEO of Global Tech Maker 
