Artificial Intelligence – Qrius (https://qrius.com)

Google announces ChatGPT competitor ‘Bard’
6 February 2023

Google on Monday announced an artificial intelligence chatbot technology called Bard that the company will begin rolling out in the coming weeks.

Google will open up Bard to ‘trusted testers’ before launching it for the public, the company said in a blog post.

Bard will compete directly with OpenAI’s ChatGPT.

Bard is powered by LaMDA (Language Model for Dialogue Applications), the company’s large language model. It features a chatbot called ‘Apprentice Bard’, as well as new desktop search designs that support a question-and-answer format.

Web search is Google’s core business, and AI is an arena in which it will clearly look to establish a leading position.

With ChatGPT’s rising popularity and its growing reputation as a go-to search alternative on the internet, Google emphasized the importance of ‘rigorous testing’, saying it will combine external feedback with its own internal testing to make sure Bard’s responses ‘meet a high bar for quality, safety and groundedness in real-world information’.

Is ChatGPT not effective, or is it you?
6 February 2023

Jonathan May, University of Southern California

It doesn’t take much to get ChatGPT to make a factual mistake. My son is doing a report on U.S. presidents, so I figured I’d help him out by looking up a few biographies. I tried asking for a list of books about Abraham Lincoln and it did a pretty good job:

A reasonable list of books about Lincoln. Screen capture by Jonathan May, CC BY-ND

Number 4 isn’t right. Garry Wills famously wrote “Lincoln at Gettysburg,” and Lincoln himself wrote the Emancipation Proclamation, of course, but it’s not a bad start. Then I tried something harder, asking instead about the much more obscure William Henry Harrison, and it gamely provided a list, nearly all of which was wrong.

Books about Harrison, fewer than half of which are correct. Screen capture by Jonathan May, CC BY-ND

Numbers 4 and 5 are correct; the rest don’t exist or are not authored by those people. I repeated the exact same exercise and got slightly different results:

More books about Harrison, still mostly nonexistent. Screen capture by Jonathan May, CC BY-ND

This time numbers 2 and 3 are correct and the other three are not actual books or not written by those authors. Number 4, “William Henry Harrison: His Life and Times” is a real book, but it’s by James A. Green, not by Robert Remini, a well-known historian of the Jacksonian age.

I called out the error and ChatGPT eagerly corrected itself and then confidently told me the book was in fact written by Gail Collins (who wrote a different Harrison biography), and then went on to say more about the book and about her. I finally revealed the truth and the machine was happy to run with my correction. Then I lied absurdly, saying during their first hundred days presidents have to write a biography of some former president, and ChatGPT called me out on it. I then lied subtly, incorrectly attributing authorship of the Harrison biography to historian and writer Paul C. Nagel, and it bought my lie.

When I asked ChatGPT if it was sure I was not lying, it claimed that it’s just an “AI language model” and doesn’t have the ability to verify accuracy. However, it modified that claim by saying “I can only provide information based on the training data I have been provided, and it appears that the book ‘William Henry Harrison: His Life and Times’ was written by Paul C. Nagel and published in 1977.”

This is not true.

Words, not facts

It may seem from this interaction that ChatGPT was given a library of facts, including incorrect claims about authors and books. After all, ChatGPT’s maker, OpenAI, claims it trained the chatbot on “vast amounts of data from the internet written by humans.”

However, it was almost certainly not given the names of a bunch of made-up books about one of the most mediocre presidents. In a way, though, this false information is indeed based on its training data.

As a computer scientist, I often field complaints that reveal a common misconception about large language models like ChatGPT and its older brethren GPT3 and GPT2: that they are some kind of “super Googles,” or digital versions of a reference librarian, looking up answers to questions from some infinitely large library of facts, or smooshing together pastiches of stories and characters. They don’t do any of that – at least, they were not explicitly designed to.

Sounds good

A language model like ChatGPT, which is more formally known as a “generative pretrained transformer” (that’s what the G, P and T stand for), takes in the current conversation, forms a probability for all of the words in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops.

So it doesn’t have facts, per se. It just knows what word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. But it does try to write sentences that are plausible.
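To make that word-by-word loop concrete, here is a minimal sketch in Python. The tiny “model” and its probabilities are invented purely for illustration; a real system like ChatGPT computes a probability for tens of thousands of tokens using a large neural network, but the generate-one-word-at-a-time loop has the same shape.

```python
import random

# Toy "language model": given the conversation so far, return a probability
# for each word in a tiny vocabulary. The numbers are made up purely to
# illustrate the idea; a real model computes them with a neural network.
def toy_next_word_probs(context):
    if context and context[-1] == "Lincoln":
        return {"wrote": 0.5, "was": 0.3, "at": 0.15, "<end>": 0.05}
    return {"Lincoln": 0.4, "the": 0.3, "a": 0.2, "<end>": 0.1}

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        probs = toy_next_word_probs(words)
        # Sample the next word in proportion to its probability,
        # then append it and repeat -- the loop described above.
        choices, weights = zip(*probs.items())
        next_word = random.choices(choices, weights=weights, k=1)[0]
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("Books about Lincoln include"))
```

Nothing in the loop checks whether the chosen words are true; it only checks that they are likely given what came before.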

When I talk privately to colleagues about ChatGPT, they often point out how many factually untrue statements it produces and dismiss it. To me, the idea that ChatGPT is a flawed data retrieval system is beside the point. People have been using Google for the past two and a half decades, after all. There’s a pretty good fact-finding service out there already.

In fact, the only way I was able to verify whether all those presidential book titles were accurate was by Googling and then verifying the results. My life would not be that much better if I got those facts in conversation, instead of the way I have been getting them for almost half of my life, by retrieving documents and then doing a critical analysis to see if I can trust the contents.

Improv partner

On the other hand, if I can talk to a bot that will give me plausible responses to things I say, it would be useful in situations where factual accuracy isn’t all that important. A few years ago a student and I tried to create an “improv bot,” one that would respond to whatever you said with a “yes, and” to keep the conversation going. We showed, in a paper, that our bot was better at “yes, and-ing” than other bots at the time, but in AI, two years is ancient history.

I tried out a dialogue with ChatGPT – a science fiction space explorer scenario – that is not unlike what you’d find in a typical improv class. ChatGPT is way better at “yes, and-ing” than what we did, but it didn’t really heighten the drama at all. I felt as if I was doing all the heavy lifting.

After a few tweaks I got it to be a little more involved, and at the end of the day I felt that it was a pretty good exercise for me, who hasn’t done much improv since I graduated from college over 20 years ago.

A space exploration improv scene the author generated with ChatGPT. Screen capture by Jonathan May, CC BY-ND

Sure, I wouldn’t want ChatGPT to appear on “Whose Line Is It Anyway?” and this is not a great “Star Trek” plot (though it’s still less problematic than “Code of Honor”), but how many times have you sat down to write something from scratch and found yourself terrified by the empty page in front of you? Starting with a bad first draft can break through writer’s block and get the creative juices flowing, and ChatGPT and large language models like it seem like the right tools to aid in these exercises.

And for a machine that is designed to produce strings of words that sound as good as possible in response to the words you give it – and not to provide you with information – that seems like the right use for the tool.


Jonathan May, Research Associate Professor of Computer Science, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can ChatGPT help you cheat in your studies?
30 January 2023

Brian Lucey, Trinity College Dublin and Michael Dowling, Dublin City University

Some of the world’s biggest academic journal publishers have banned or curbed their authors from using the advanced chatbot, ChatGPT. Because the bot uses information from the internet to produce highly readable answers to questions, the publishers are worried that inaccurate or plagiarised work could enter the pages of academic literature.

Several researchers have already listed the chatbot as a co-author on academic studies, and some publishers have moved to ban this practice. But the editor-in-chief of Science, one of the top scientific journals in the world, has gone a step further and forbidden any use of text from the program in submitted papers.

It’s not surprising the use of such chatbots is of interest to academic publishers. Our recent study, published in Finance Research Letters, showed ChatGPT could be used to write a finance paper that would be accepted for an academic journal. Although the bot performed better in some areas than in others, adding in our own expertise helped overcome the program’s limitations in the eyes of journal reviewers.

However, we argue that publishers and researchers should not necessarily see ChatGPT as a threat but rather as a potentially important aide for research – a low-cost or even free electronic assistant.

Our thinking was: if it’s easy to get good outcomes from ChatGPT by simply using it, maybe there’s something extra we can do to turn these good results into great ones.

We first asked ChatGPT to generate the standard four parts of a research study: research idea, literature review (an evaluation of previous academic research on the same topic), dataset, and suggestions for testing and examination. We specified only the broad subject and that the output should be capable of being published in “a good finance journal”.

This was version one of how we chose to use ChatGPT. For version two, we pasted into the ChatGPT window just under 200 abstracts (summaries) of relevant, existing research studies.

We then asked that the program take these into account when creating the four research stages. Finally, for version three, we added “domain expertise” — input from academic researchers. We read the answers produced by the computer program and made suggestions for improvements. In doing so, we integrated our expertise with that of ChatGPT.
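As a rough sketch of how the three escalating versions described above could be assembled programmatically, the example below builds the prompts and sends them to a chat model. This is not what the authors did (they describe working in the ChatGPT window), and the model name, abstracts file and expert notes are placeholders; the call assumes the pre-1.0 openai Python package’s ChatCompletion interface.

```python
# Illustrative only: the study's authors pasted prompts into the ChatGPT window;
# this sketch just mirrors the same three-version structure programmatically.
# Model name, file path and expert notes are placeholders.
import openai  # pre-1.0 openai package interface assumed

SECTIONS = ("research idea, literature review, dataset, "
            "and suggestions for testing and examination")
BASE = (f"Generate the following four parts of a finance research study, "
        f"suitable for a good finance journal: {SECTIONS}.")

def ask(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message["content"]

# Version 1: broad subject only.
v1 = ask(BASE)

# Version 2: add ~200 abstracts of existing studies for the model to draw on.
abstracts = open("relevant_abstracts.txt").read()  # hypothetical file
v2 = ask(BASE + "\n\nTake these abstracts into account:\n" + abstracts)

# Version 3: add domain expertise -- the researchers' suggested improvements.
expert_notes = "Tighten the hypothesis; use daily returns in the dataset."  # placeholder
v3 = ask(BASE + "\n\nAbstracts:\n" + abstracts
         + "\n\nIncorporate these expert suggestions:\n" + expert_notes)
```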

We then requested a panel of 32 reviewers each review one version of how ChatGPT can be used to generate an academic study. Reviewers were asked to rate whether the output was sufficiently comprehensive, correct, and whether it made a contribution sufficiently novel for it to be published in a “good” academic finance journal.

The big take-home lesson was that all these studies were generally considered acceptable by the expert reviewers. This is rather astounding: a chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas — questions to which nobody yet has solid answers.

Strengths and weaknesses

The results also highlight some potential strengths and weaknesses of ChatGPT. We found that different research sections were rated differently. The research idea and the dataset tended to be rated highly. There was a lower, but still acceptable, rating for the literature reviews and testing suggestions.

Our suspicion here is that ChatGPT is particularly strong at taking a set of external texts and connecting them (the essence of a research idea), or taking easily identifiable sections from one document and adjusting them (an example is the data summary — an easily identifiable “text chunk” in most research studies).

A relative weakness of the platform became apparent when the task was more complex – when there are too many stages to the conceptual process. Literature reviews and testing tend to fall into this category. ChatGPT tended to be good at some of these steps but not all of them. This seems to have been picked up by the reviewers.

We were, however, able to overcome these limitations in our most advanced version (version three), where we worked with ChatGPT to come up with acceptable outcomes. All sections of the advanced research study were then rated highly by reviewers, which suggests the role of academic researchers is not dead yet.

Ethical implications

ChatGPT is a tool. In our study, we showed that, with some care, it can be used to generate an acceptable finance research study. Even without care, it generates plausible work.

This has some clear ethical implications. Research integrity is already a pressing problem in academia and websites such as RetractionWatch convey a steady stream of fake, plagiarised, and just plain wrong, research studies. Might ChatGPT make this problem even worse?

It might, is the short answer. But there’s no putting the genie back in the bottle. The technology will also only get better (and quickly). How exactly we might acknowledge and police the role of ChatGPT in research is a bigger question for another day. But our findings are also useful in this regard – by finding that the ChatGPT study version with researcher expertise is superior, we show the input of human researchers is still vital in acceptable research.

For now, we think that researchers should see ChatGPT as an aide, not a threat. It may particularly be an aide for groups of researchers who tend to lack the financial resources for traditional (human) research assistance: emerging economy researchers, graduate students and early career researchers. It’s just possible that ChatGPT (and similar programs) could help democratise the research process.

But researchers need to be aware of the ban on its use in the preparation of journal papers. It’s clear that there are drastically different views of this technology, so it will need to be used with care.

This article was updated on 27 January to reflect the news about academic publishers addressing ChatGPT in their editorial policies.

Brian Lucey, Professor of International Finance and Commodities, Trinity College Dublin and Michael Dowling, Professor of Finance, Dublin City University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can your eyes tell people what you are thinking?
13 January 2023

Szonya Durant, Royal Holloway University of London

For most of human history if you wanted to know what was going on behind someone’s eyes you had to make your best guess. But since the 1960s scientists have been studying the way eye movements may help decode people’s thoughts. The ability to eavesdrop on the details of people’s daydreams and internal monologues is still science fiction. But research is helping us learn more about the connections between our eyes and our mental state.

Most recently, research in Germany showed eye tracking could help detect where someone is at in their thinking process.

This kind of research is about more than general nosiness. Imagine you are a pilot trying a tricky manoeuvre which takes your full concentration. Meanwhile you’ve missed the flashing alarm needing your attention. Technology is only helpful if it’s in sync with the way humans think and behave in the real world.

Being able to track thought processes can avoid life threatening disconnects between humans and computers. If you combined psychology research on eye tracking with AI, the results could revolutionise computer interfaces and be a game changer for people with learning disabilities.

Eye movement tracking goes back to the 1960s when the first versions of the technology were developed by pioneering scientist Alfred Yarbus. Back then, uncomfortable suction caps were attached to participants’ eyes and reflected light traced their point of focus.

Yarbus found we are constantly shifting our gaze, focusing on different parts of the scene in front of us. With every eye movement, different parts of the scene come into sharp focus, and other parts in the edge of our vision become blurry. We cannot take it in all at once.

The way we sample the scene is not random. In Yarbus’s famous 1967 study, he asked people to look at a painting.

He then asked participants “how rich the people were” and “what the relationship between the people was”. Different patterns of eye movements emerged according to the question asked.

Making progress

Since then, infrared cameras and computer programmes have made eye tracking easier. In the last few years, research has shown eye tracking can reveal what stage someone is at in their thinking. In cognitive psychology experiments people are often asked to find an object in an image – a Where’s Wally puzzle.

People’s intentions influence how their eyes move. For instance, if they are looking for a red object, their eyes will first move to all the red objects in the scene. So, a person’s eye movements reveal the contents of their short-term memory.

The 2022 German study showed eye tracking can distinguish between two phases of thinking. The ambient mode involves taking in information. Focal processing happens in the later stages of problem solving.

In ambient mode, the eyes move rapidly over large distances for rough impressions of interesting targets. It is used for spatial orientation. Then, we focus on information for longer periods of time as we process it more deeply.

Before this study, these changes in gaze patterns had been studied in the context of changes in a visual stimulus. But the German study was one of the first to find that our eyes switch between these patterns of movement in response to a thought process.

The test subjects were asked to assemble a Rubik’s cube according to a model. The visual stimulus did not change but participants’ eye movements showed they were in ambient mode when information was taken in. The pattern of participants’ eye movements switched as they moved onto different parts of the task, such as selecting a puzzle piece.
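As a rough illustration of how gaze recordings can be split into these two modes, the sketch below labels each fixation as “ambient” or “focal” using a simple rule of thumb: short fixations followed by large eye movements are treated as ambient, and long fixations with small movements as focal. The thresholds and data are invented for the example; the studies described here derive their criteria far more carefully.

```python
# Hedged illustration: classify fixations as "ambient" or "focal" scanning.
# Thresholds and sample data are invented; published studies derive such
# criteria from the eye-tracking literature, not from these numbers.
FIXATION_MS_THRESHOLD = 250   # assumed: short fixations suggest ambient mode
SACCADE_DEG_THRESHOLD = 5.0   # assumed: large saccades suggest ambient mode

def classify_fixation(duration_ms, next_saccade_deg):
    """Label one fixation by its duration and the size of the following saccade."""
    if duration_ms < FIXATION_MS_THRESHOLD and next_saccade_deg > SACCADE_DEG_THRESHOLD:
        return "ambient"   # quick look, big jump: taking in the scene
    if duration_ms >= FIXATION_MS_THRESHOLD and next_saccade_deg <= SACCADE_DEG_THRESHOLD:
        return "focal"     # long look, small jump: processing in depth
    return "mixed"

# (fixation duration in ms, amplitude of the following saccade in degrees) -- made-up data
recording = [(120, 9.5), (140, 7.2), (380, 1.1), (420, 0.8), (150, 6.0), (510, 1.9)]

for duration, saccade in recording:
    print(duration, saccade, classify_fixation(duration, saccade))
```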

Looking ahead

This research suggests technology intended to work together with a human operator could use eye tracking to track their user’s thought process. In my team’s recent work we designed a system that presented many different displays in parallel on a computer screen.

Our programme tracked people’s eye movements to trace which information participants were looking at and guide where they should look, using AI to generate arrows and highlights on the screen. Applying AI methods to eye tracking data can also help show whether someone is tired or detect different learning disorders such as dyslexia.

Eye movements may also hold clues about someone’s emotional state. For example one study found low mood leads people to move their eyes to look at negative words such as “failure” more. A study analysing the results of many experiments found people with depression avoided looking at positive stimuli (such as happy faces) and people with anxiety fixated on signs of threat.

Tracking eye movements can also help people learn by monitoring where someone is stuck in a task. One study involving cardiologists learning to read electrocardiograms used AI based on their eye movements to decide if they needed more guidance.

In the future AI may be able to combine eye tracking with other measures such as heart rate or changes in brain activity to get a more accurate estimate of someone’s thinking as they solve a problem. The question is, do we want computers to know what we are thinking?


Szonya Durant, Senior Lecturer of Psychology, Royal Holloway University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Top 10 Programming Languages of the Future
22 December 2022

Introduction

With over 700 programming languages available in the world, deciding which ones to learn can be difficult. Despite increasing competition among different programming languages over the years, a few have stood out. These languages are useful not only today but also in the future. These are the languages we refer to as future programming languages.

The top ten programming languages of the future are as follows:

1. JavaScript

JavaScript is the most popular web development programming language. JavaScript has been the most commonly used programming language for the last nine years, according to Stack Overflow’s Developer Survey. The support for libraries and frameworks gives JavaScript a significant advantage over other programming languages. Companies such as Facebook and Google have even created JavaScript frameworks, which are now widely used in the industry. JavaScript can be used to create frontend and backend applications, games, mobile apps, and even Machine Learning models.

JavaScript Advantages-

  • JavaScript’s numerous frameworks allow it to be used for a variety of applications.
  • JavaScript developers are in high demand all over the world.
  • JavaScript is simple to learn and platform agnostic.
  • JavaScript is used to create client-side as well as server-side applications.

2. Python

Python is a simple, object-oriented, and versatile programming language. Like JavaScript, Python has a plethora of libraries that make it future-proof. It is the first choice of most programmers who also want to work on Machine Learning and Artificial Intelligence. Its applications include frontend and backend development, web automation, computer vision, and code testing. With the growing need and demand for Data Science and Artificial Intelligence, Python is here to stay in the coming years.

Python’s Advantages-

  • Python is one of the simplest programming languages to learn.
  • Python’s libraries enable it to have a wide range of applications.
  • Python can be used to create a graphical user interface.

3. Java

Java is commonly used in the creation of client-side applications. Although it may appear that Java is losing its lustre, surveys show that it is still one of the industry’s most widely used languages. Java is used to create web applications, mobile applications, games, AI, and cloud applications, among other things. The advantage of Java is that once compiled, Java code can be executed on any platform that supports Java without the need for recompilation.

Java Advantages-

  • Java is an object-oriented programming language.
  • Because it does not use explicit pointers, Java is a secure language.
  • Java employs multi-threading and has sophisticated memory management.

4. TypeScript

TypeScript, which was created by Microsoft in 2012, has grown in popularity among developers. TypeScript is the fifth most popular and third most loved language among developers, according to the Stack Overflow Developer Survey 2021. Because TypeScript is built on top of JavaScript, code written in JavaScript works in TypeScript as well. Classes and Object Oriented Programming are supported by TypeScript.

TypeScript Advantages-

  • Because it is type-safe, TypeScript makes it simple to avoid errors.
  • Classes and objects are supported by TypeScript.
  • TypeScript is compatible with the most recent JavaScript features.
  • TypeScript is supported by all modern browsers.

5. C#

C# (C-sharp) is a popular programming language for creating games and Windows applications. It is an object-oriented, statically typed language built on the .NET framework. C# has several important characteristics, including being fast, scalable, component-oriented, and type-safe. Almost every year, it ranks among the top ten most popular languages. There are numerous opportunities for C# developers across the globe.

The Advantages of C#-

  • C# is a fast and scalable programming language.
  • C# is simple to learn.
  • C# is compatible with .NET libraries.
  • C# is in high demand for app and game development.

6. Kotlin

The official language for Android development is Kotlin. Kotlin is used by more than 60% of Android developers. It is a statically typed language that allows for object-oriented and functional programming. It is backwards compatible with Java and supports all of its libraries. When compared to Java, Kotlin code is more concise and readable. This is why it has grown in popularity and is now one of the fastest-growing languages. Aside from Android development, it can also be used to create desktop apps and websites.

Kotlin Advantages-

  • Kotlin is more straightforward and concise than Java.
  • The null pointer exception is handled by Kotlin.
  • Kotlin is extremely safe to use.
  • Kotlin can be used to create cross-platform applications.

7. Swift

Swift was created by Apple in 2014. It is primarily used to create applications for iOS, macOS, and watchOS. Swift can be used to create both client-side and server-side programs. Swift was developed as an improvement over Objective-C: it is faster, more user-friendly, and includes type inference. It even allows you to see the results of the program in real time. Swift was one of the top ten most popular programming languages on Stack Overflow in 2021.

Swift’s Advantages-

  • Swift is significantly faster than Objective-C.
  • Swift is an easy-to-learn programming language.
  • Swift supports the use of a wide range of libraries and frameworks.
  • Swift focuses on program security and assists us in avoiding app crashes.

8. C/C++

C and C++ are both intermediate programming languages. It means they have characteristics of both low-level and high-level languages. Both of these languages are extremely fast due to their low-level features. C is primarily used to create operating systems, kernels, and embedded systems. It serves as the foundation for many programming languages, including Ruby and Python. C++, on the other hand, is a superset of C that includes Object Oriented Programming. C++ is primarily used in GUI development, game development, and desktop application development.

Advantages of C/C++

  • C and C++ are both extremely fast.
  • Dynamic memory allocation is supported by both languages.
  • Both languages are simpler to learn than lower-level languages.
  • C can be used to create embedded systems as well as operating systems.
  • C++ can be used to create applications and games.

9. Rust

Rust originated in 2006 and was later developed under Mozilla’s sponsorship, reaching its 1.0 release in 2015. Rust has been the most loved programming language for the last six years, according to the Stack Overflow Developer Survey, and has even been ranked as one of the highest-paying languages in the world. Rust is a compiled, low-level systems programming language. It is so well liked because it addresses problems that plague C and C++: concurrent programming and memory errors. Rust gives us control over the hardware while also providing a high level of safety.

The Advantages of Rust

  • Rust has a strong memory management system and is therefore safer than most other languages.
  • Rust is ideal for creating embedded systems.
  • While it solves problems in C++, the speed of Rust is comparable to that of C++.
  • Rust allows us to debug and test quickly.

10. Go

Go (or Golang), which was launched by Google in 2009, has quickly gained popularity over the years. According to the Stack Overflow Developer Survey 2021, Go has surpassed Kotlin, Swift, and Rust in popularity and is now the tenth most popular programming language. Go’s popularity can be attributed to its speed and simplicity. It is primarily used for system-level software development, but it is also widely used for cloud computing and backend development. It includes features such as type safety and garbage collection.

Advantages of Go-

  • Go is a quick and efficient language.
  • Go includes a built-in testing tool.
  • Go includes a garbage collector and supports concurrent operations.

Conclusion

Some of the best programming languages today are JavaScript, Python, and Java.

TypeScript is a superset of JavaScript that adds object-oriented features and type safety. Kotlin is primarily used to create Android apps, whereas Swift is used to create iOS, macOS, and watchOS apps. C# is a popular language for creating games and Windows applications. C is commonly used to create operating systems, whereas C++ is used to create graphical user interfaces and games. Both Rust and Go are used to create system-level software. Although Rust is the most loved language in developer surveys, Go is more widely used.

Is ‘Artemis’ the last mission for NASA’s astronauts, as robots take over?
28 November 2022

Martin Rees, University of Cambridge

Neil Armstrong took his historic “one small step” on the Moon in 1969. And just three years later, the last Apollo astronauts left our celestial neighbour. Since then, hundreds of astronauts have been launched into space but mainly to the Earth-orbiting International Space Station. None has, in fact, ventured more than a few hundred kilometres from Earth.

The US-led Artemis programme, however, aims to return humans to the Moon this decade – with Artemis 1 on its way back to Earth as part of its first test flight, going around the Moon.

The most relevant differences between the Apollo era and the mid-2020s are an amazing improvement in computer power and robotics. Moreover, superpower rivalry can no longer justify massive expenditure, as in the Cold War competition with the Soviet Union. In our recent book “The End of Astronauts”, Donald Goldsmith and I argue that these changes weaken the case for the project.

The Artemis mission is using Nasa’s brand new Space Launch System, which is the most powerful rocket ever – similar in design to the Saturn V rockets that sent a dozen Apollo astronauts to the Moon. Like its predecessors, the Artemis booster combines liquid hydrogen and oxygen to create enormous lifting power before falling into the ocean, never to be used again. Each launch therefore carries an estimated cost of between $2 billion (£1.7 billion) and $4 billion.

This is unlike its SpaceX competitor Starship, which is designed so that the company can recover and reuse the first stage.

The benefits of robotics

Advances in robotic exploration are exemplified by the suite of rovers on Mars, where Perseverance, Nasa’s latest prospector, can drive itself through rocky terrain with only limited guidance from Earth. Improvements in sensors and artificial intelligence (AI) will further enable the robots themselves to identify particularly interesting sites, from which to gather samples for return to Earth.

Within the next one or two decades, robotic exploration of the Martian surface could be almost entirely autonomous, with human presence offering little advantage. Similarly, engineering projects – such as astronomers’ dream of constructing a large radio telescope on the far side of the Moon, which is free of interference from Earth – no longer require human intervention. Such projects can be entirely constructed by robots.

Instead of astronauts, who need a well equipped place to live if they’re required for construction purposes, robots can remain permanently at their work site. Likewise, if mining of lunar soil or asteroids for rare materials became economically viable, this also could be done more cheaply and safely with robots.

Robots could also explore Jupiter, Saturn and their fascinatingly diverse moons with little additional expense, since journeys of several years present little more challenge to a robot than the six-month voyage to Mars. Some of these moons could in fact harbour life in their sub-surface oceans.

Even if we could send humans there, it might be a bad idea as they could contaminate these worlds with microbes from Earth.

Managing risks

The Apollo astronauts were heroes. They accepted high risks and pushed technology to the limit. In comparison, short trips to the Moon in the 2020s, despite the $90-billion cost of the Artemis programme, will seem almost routine.

Something more ambitious, such as a Mars landing, will be required to elicit Apollo-scale public enthusiasm. But such a mission, including provisions and the rocketry for a return trip, could well cost Nasa a trillion dollars – questionable spending when we’re dealing with a climate crisis and poverty on Earth. The steep price tag is a result of a “safety culture” developed by Nasa in recent years in response to public attitudes.

Artemis 1 launch. NASA

This reflects the trauma and consequent programme delays that followed the Space Shuttle disasters in 1986 and 2003, each of which killed the seven civilians on board. That said, the shuttle, which had 135 launches altogether, achieved a failure rate below two percent. It would be unrealistic to expect a rate as low as this for the failure of a return trip to Mars – the mission would after all last two whole years.

Astronauts simply also need far more “maintenance” than robots – their journeys and surface operations require air, water, food, living space and protection against harmful radiation, especially from solar storms.

Already substantial for a trip to the Moon, the cost differences between human and robotic journeys would grow much larger for any long-term stay. A voyage to Mars, hundreds of times further than the Moon, would not only expose astronauts to far greater risks, but also make emergency support far less feasible. Even astronaut enthusiasts accept that almost two decades may elapse before the first crewed trip to Mars.

There will certainly be thrill-seekers and adventurers who would willingly accept far higher risks – some have even signed up for a proposed one-way trip in the past.

This signals a key difference between the Apollo era and today: the emergence of a strong, private space-technology sector, which now embraces human spaceflight. Private-sector companies are now competitive with Nasa, so high-risk, cut-price trips to Mars, bankrolled by billionaires and private sponsors, could be crewed by willing volunteers. Ultimately, the public could cheer these brave adventurers without paying for them.

Given that human spaceflight beyond low orbit is highly likely to entirely transfer to privately-funded missions prepared to accept high risks, it is questionable whether Nasa’s multi-billion-dollar Artemis project is a good way to spend the government’s money. Artemis is ultimately more likely to be a swansong than the launch of a new Apollo era.


Martin Rees, Emeritus Professor of Cosmology and Astrophysics, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Artificial Intelligence is Transforming Healthcare
21 November 2022

Artificial intelligence has been gradually transforming several industries worldwide as we discover greater use cases for this technology. AI has already become an essential tool in healthcare and research by allowing us to quickly sort and search through data, track diseases, and much more. Here are a few ways artificial intelligence has transformed healthcare.

Better Diagnosis

AI has the potential to revolutionize how we diagnose disease. Machine learning, deep learning, and reinforcement learning are all tools that clinicians can use to help diagnose patients more accurately than ever before.

AI can reduce the time it takes to diagnose a patient by using machine learning to find patterns in data from other patients with similar symptoms. This allows AI algorithms to make predictions about what diagnosis each patient may have based on their symptoms and medical history without human intervention or interpretation. This capability significantly reduces the time it can take to diagnose a patient.
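As a loose illustration of the “find patterns in similar patients” idea, the sketch below applies a nearest-neighbours classifier to a made-up symptom table. The features, labels and data are entirely invented, and nothing here is medical guidance; real diagnostic systems are trained on large, validated clinical datasets and used under human oversight.

```python
# Hedged sketch: suggest a likely diagnosis from the most similar past patients.
# All data below is invented for illustration; it is not medical guidance.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [fever (0/1), cough (0/1), fatigue (0/1), age]
past_patients = [
    [1, 1, 0, 34],
    [1, 1, 1, 67],
    [0, 1, 0, 25],
    [0, 0, 1, 45],
    [1, 0, 1, 71],
]
past_diagnoses = ["flu", "pneumonia", "cold", "anaemia", "pneumonia"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(past_patients, past_diagnoses)

# A new patient is matched against the most similar past cases.
new_patient = [[1, 1, 1, 62]]
print(model.predict(new_patient))        # most common diagnosis among the neighbours
print(model.predict_proba(new_patient))  # rough class probabilities
```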

AI also allows doctors and nurses to spend less time researching new cases and more time treating them by using deep learning techniques like image recognition or natural language processing. With these techniques, machines analyze images from CT scans or MRIs and notes written by doctors and transfer them into electronic health records (EHRs).

Pandemic Tools

During the COVID-19 pandemic, artificial intelligence became an essential tool for tracking and predicting the spread of the disease. AI helps policymakers better understand the speed and severity of the disease by reporting and predicting how the disease spreads throughout a population. AI can further take this information and create graphs and maps that help us prepare supplies, ready medical centers, and implement safety mandates for specific areas.

Not only can AI help us predict the spread of disease, but it can also help us perform mass temperature checks of crowds and flag people who may be infected. In many areas of the world during the pandemic, anyone walking into a grocery store was automatically temperature-checked by a camera that scanned each person entering; the system alerted staff whenever a customer had a high temperature. Such AI tools can even recognize the faces of infected individuals, regardless of whether they are wearing a face mask. AI technology will become an essential tool in the fight against future pandemics.

Increased Accessibility

The adoption of artificial intelligence by healthcare providers has increased access to healthcare. With the help of AI, doctors and nurses can now handle a greater number of patients without increasing their workload. Moreover, these additional resources allow them to provide better care and increase efficiency and productivity within their facilities. As such, AI offers a unique opportunity to improve the quality and accessibility of healthcare worldwide by reducing costs while contributing to its sustainability.

The Application of AI in Data Analysis

The application of AI in data analysis is one of the most significant developments in modern medicine. The ability to analyze large amounts of patient data, whether clinical or diagnostic, and to predict outcomes allows doctors to make more informed decisions when treating patients. This can mean faster diagnosis and treatment plans for patients, or even saving lives by detecting a condition before it becomes critical. When it comes to creating biopharma treatments and medicines, AI is essential for searching through large quantities of data, such as DNA sequences.

During the COVID-19 pandemic, researchers needed to rapidly sort through millions of strands of DNA to develop an effective mRNA vaccine. Research laboratories worldwide require the help of medical manufacturers like Avantor, who “support the development and production of life-changing treatment for patients around the world.” Developing new vaccines typically requires years of research and testing; however, with the help of AI technology, scientists could create effective vaccines within a year of the pandemic’s start.

Virtual Reality in Medicine

Virtual Reality is being used in medicine in several ways. It can help patients overcome phobias, anxiety, and post-traumatic stress disorder (PTSD). VR is also used for pain management, mental health, dementia, and memory loss.

Currently, several medical schools and hospitals utilize VR for surgeries. VR can help train surgeons on complex procedures without any risk, while surgeons using AI and VR in the field can benefit from automatic, real-time patient information through glasses that display essential stats and communications hands-free throughout the procedure.

The healthcare industry is flourishing and improving with artificial intelligence tools every day. As more healthcare professionals worldwide continue to adopt these technologies, we will see new and improved ways of utilizing this technology to help patients receive better care.


Charting the Course of Artificial Intelligence
4 November 2022

Jasika Walia and Noah Provenzano, Fischer Jordan

Introduction

AI has been all the talk in recent years. Over the last decade, we have seen massive growth in this field, with an expanding breadth of use cases in government, corporate, and consumer contexts. AI has become a part of almost everything we do, often without our realizing it. Investment in AI has been skyrocketing, doubling between 2020 and 2021 [1], as shown below.

A portion of AI’s growth is owed to some notable successes in R&D over the last 5 -10 years. It has performed exceptionally well in fields with widely available data and massive computational resources. These were the most high-return, low-risk analytical problems that were easiest to tackle first. The table below shows some familiar examples [2]. 

Projections

As the scope of AI grows, different views emerge on its path forward. However, predicting the course of any technology is a complex and error-prone undertaking. Still, there has been no shortage of efforts to try to map out the future of AI. Unfortunately, many of these attempts were forced to use broad figures or an arbitrary framework to make predictions. 

For example, Figure 2 is a hype cycle chart from Gartner [3] that estimates how long an AI application is expected to match its potential. Gartner acknowledges that applications can disappear and reappear anywhere on the chart. This is an interesting framework, and the expectations axis seems relatable; however, expectations are hard to quantify, hence it is difficult to make any future projections using this chart. 

Analysis

For our analysis, we have used the following definition of AI:

“Artificial Intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.” [4] – IBM

Figure 4 further elaborates on what we are considering as AI.

Our research approach included the creation of a database of 60 AI applications, each rated with regard to data availability and development. To understand the rating scale, we put these two dimensions on the X and Y axes of Figure 5 and divided the chart into four sections for discussion.

The size of each application’s bubble represents its potential market size, and the color of the bubble represents the share of that market already realized: the larger the currently realized share of the potential market, the greener the bubble. This key is shown in Figure 6.

Using this key and the relationship between data availability and development, AI applications are plotted in Figure 7. The arrow on each application represents its predicted movement over the next 3 to 5 years. 

Figure 7. AI Applications by Development and Data Availability
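For readers who want to reproduce this kind of view, the sketch below draws a comparable bubble chart with matplotlib. The applications, ratings, market sizes and realized shares are placeholder numbers, not the data behind Figure 7.

```python
# Hedged sketch of a development-vs-data-availability bubble chart.
# All values are placeholders, not the data used in Figure 7.
import matplotlib.pyplot as plt

apps = ["Advertising", "Energy market", "Self-driving", "Human robots"]
data_availability = [9, 8, 3, 2]       # assumed 1-10 rating
development = [9, 3, 8, 2]             # assumed 1-10 rating
market_size_bn = [600, 150, 300, 80]   # bubble area ~ potential market size
realized_share = [0.7, 0.1, 0.2, 0.02] # bubble color ~ share already realized

fig, ax = plt.subplots()
bubbles = ax.scatter(
    data_availability, development,
    s=[m * 2 for m in market_size_bn],   # scale areas for visibility
    c=realized_share, cmap="RdYlGn", vmin=0, vmax=1, alpha=0.7,
)
for x, y, name in zip(data_availability, development, apps):
    ax.annotate(name, (x, y), ha="center", va="center", fontsize=8)

ax.set_xlabel("Data availability rating")
ax.set_ylabel("Development rating")
ax.invert_xaxis()  # keep data-rich applications on the left, matching the quadrant labels
fig.colorbar(bubbles, label="Share of potential market realized")
plt.show()
```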

A. Upper Left – Mature

These applications have shown significant progress and huge investments. Most of the low-hanging fruit here is gone as industry capitalized on these areas and saw great returns early on. 

Example – Advertising:

Online advertising has exploded due to AI developments. This field had incredible volumes of internet data waiting to be tapped into not that long ago. Some of the largest companies filled this void and offer highly sophisticated data-driven advertising.

B. Lower Left – Untapped

These applications are near-term opportunities as lots of data is available, but development potential still exists. We haven’t seen much funding in this space yet.

Example – Energy Market:

AI can be used to efficiently optimize energy distribution. This is such a large project that it will take a lot of work to apply AI across all areas of this industry. 

C. Upper Right – Ahead of the Curve

These applications were among the first attempts to have AI solve difficult algorithmic problems. As they progressed, they either received heavy funding and shifted to become less AI-reliant, or are developing very slowly.

Example – Self Driving:

Self-driving cars have seen immense funding. Since there is pressure to see returns, these companies have realized where AI performs well in driving and where it does not. One example would be programming in particular scenarios to account for variability in results.

D. Lower Right – Future Opportunities

Much more progress with AI is required to develop these applications. We need better data collection, management, and possibly much more computing power. Investors need to be careful as returns may take a long time to be realized.

Example – Human Robots:

All of us have wondered when we will get to enjoy the services of robot butlers. However, the sheer number of situations where a robot like this will need to perform well is what makes it such a difficult task. 

Predictions

Based on the characteristics of each quadrant in Figure 7, the predicted trajectory of AI applications in the medium term is represented by the arrows. If we apply these arrows, we get the predicted version of the chart shown in Figure 8.

Figure 8. AI Applications by Development and Data Availability in 3 to 5 Years

The changes in this predicted chart are broken down by quadrant.

  1. Upper Left – These mature applications will grow at a steady pace. Since they have lots of training data available, they will continue to eat away their potential market size at historical rates.
  2. Lower Left – These untapped applications will develop the most because they have the data available and much more development potential. With time, their development will match the trend of the other data-rich applications.
  3. Upper Right – These ahead-of-the-curve AI applications will develop slowly as they have low amounts of training data for all possible scenarios. Since they were invested in early on and have more pressure to see returns, these will see some increase in data availability to meet their goals.
  4. Lower Right – These applications will progress faster than the upper right since there is much more room for growth. However, applications with very low data available for all possible scenarios will take a long time to develop regardless of how high the ceiling is. A large breakthrough may be required to see progress in those cases.

Conclusion

On taking a deeper look at the untapped quadrant in the bottom left, we can see expected growth in market sizes for the three most promising AI applications.

Figure 9. Expected Growth in Market Size for Promising AI Applications [12, 13, 14, 15, 16, Own analysis]

Based on our research, we believe that the future of AI cannot be captured in a homogenous manner. AI will progress depending on the specifics of the application and data availability is crucial to determining the rate of development. While this analysis presents a unique way to view AI and its growth, it is not nearly the end of AI development research. We hope to continue to see many others attempt to map out this difficult market.


Views are personal

Are algorithm managers the new horrible bosses?
17 October 2022

Robert Donoghue, University of Bath and Tiago Vieira, European University Institute

The 1999 cult classic film Office Space depicts Peter’s dreary life as a cubicle-dwelling software engineer. Every Friday, Peter tries to avoid his boss and the dreaded words: “I’m going to need you to go ahead and come in tomorrow.”

This scene is still popular on the internet nearly 25 years later because it captures troubling aspects of the employment relationship – the helplessness Peter feels, the fake sympathy his boss intones when issuing this directive, the never-ending demand for greater productivity.

There is no shortage of pop culture depictions of horrible bosses. There is even a film with that title. But things could be about to get worse. What is to be made of the new bosses settling into workplaces across all sectors: the algorithm managers?

The rise of algorithm management

The prospect of robots replacing workers is frequently covered in the media. But, it is not only labour that is being automated. Managers are too. Increasingly we see software algorithms assume managerial functions, such as screening job applications, delegating work, evaluating worker performance – and even deciding when employees should be fired.

The offloading of tasks from human managers to machines is only set to increase as surveillance and monitoring devices become increasingly sophisticated. In particular, wearable technology that can track employee movements.

From an employer’s point of view, there is much to be gained from transferring managers’ duties to algorithms. Algorithms lower business costs by automating tasks that take longer for humans to complete. Uber, with its 22,800 employees, can supervise 3.5 million drivers according to the latest yearly figures.

Artificial intelligence systems can also discover ways to optimise business organisations. Uber’s surge pricing model (temporarily raising prices to attract drivers during busy times) is only possible because an algorithm can process real-time changes in passenger demand.
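As a toy illustration of that kind of dynamic pricing, the sketch below computes a surge multiplier from a demand-to-supply ratio. The formula, the cap and the numbers are invented for the example; Uber’s actual pricing model is proprietary and considers far more signals than this.

```python
# Hedged sketch of surge pricing: raise prices when demand outstrips supply.
# The formula and cap are invented for illustration only.
def surge_multiplier(ride_requests, available_drivers, cap=3.0):
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    # No surge while supply covers demand; otherwise scale with the imbalance.
    multiplier = max(1.0, ratio)
    return min(multiplier, cap)

base_fare = 10.0
for requests, drivers in [(80, 100), (150, 100), (400, 100)]:
    m = surge_multiplier(requests, drivers)
    print(f"{requests} requests / {drivers} drivers -> x{m:.1f}, fare ${base_fare * m:.2f}")
```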

The risks

Some problems associated with algorithm management receive more attention than others. Perhaps the risk most discussed by journalists, researchers, and policymakers is algorithmic bias.

Amazon’s defunct CV ranking system is an infamous example. This program, which was used to rate applicant CVs on a one-to-five scale, was discontinued because it consistently rated CVs with male characteristics higher than comparable ones deemed more feminine.

But several other issues surround the growth of algorithm management.

One is the problem of transparency. Classic algorithms are programmed to make decisions based on step-by-step instructions and only give programmed outputs.

Machine-learning algorithms, on the other hand, learn to make decisions on their own after exposure to lots of training data. This means they become more complex as they develop, making their operations opaque even to programmers.
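To make the contrast concrete, here is a minimal, invented example: the first function follows explicit programmed rules a reviewer can read, while the second learns its rule from made-up past decisions, so the reasoning ends up encoded in fitted weights rather than in legible steps.

```python
# Invented example contrasting a programmed rule with a learned one.
from sklearn.neural_network import MLPClassifier

# Classic algorithm: the decision logic is explicit, step by step, and auditable.
def approve_shift_classic(deliveries_per_hour, complaints):
    return deliveries_per_hour >= 8 and complaints <= 1

# Machine-learning algorithm: the rule is inferred from past decisions (made up
# here) and lives inside fitted weights rather than readable code.
past_features = [[10, 0], [9, 1], [6, 0], [12, 3], [7, 2], [11, 1]]
past_labels = [1, 1, 0, 0, 0, 1]  # 1 = shift approved, 0 = not approved

learned = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
learned.fit(past_features, past_labels)

worker = [9, 1]
print(approve_shift_classic(*worker))   # reasoning visible in the code above
print(learned.predict([worker])[0])     # reasoning buried in the fitted weights
```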

When the reasoning behind a decision like whether to sack an employee is not transparent, a morally dubious arrangement is afoot. Was the algorithm’s decision to fire the employee biased, corrupt or arbitrary?

If so, its output would be considered morally illegitimate, if not illegal in most cases. But how would an employee demonstrate that their dismissal was the result of unlawful motivations?

Algorithm management exacerbates the power imbalance between employers and employees by shielding abuses of power from redress. And algorithms cut a critical human function from the employment relationship. It’s what late philosopher Jean-Jacques Rousseau called our “natural sense of pity” and “innate repugnance to seeing one’s fellow human suffer”.

Even though not all human managers are compassionate, there is zero per cent chance that algorithm managers will be. In our case study of Amazon Flex couriers, we observed the exasperation that platform workers feel about the algorithm’s inability to accept human appeals. Algorithms designed to maximise efficiency are indifferent to childcare emergencies. They have no tolerance for workers moving slowly because they are still learning the job. They do not negotiate to find a solution that helps a worker struggling with illness or disability.

What can we do

The risks faced by workers under the management of algorithms are already a central focus of researchers, trade unions and software developers who are trying to promote good working conditions. US politicians are discussing an extension of digital rights for workers. Other solutions include regular impact assessments of how algorithms affect workers and giving employees a say in how these technologies are used.

While businesses may find management algorithms to be highly lucrative, the need to make a profit is no reason to tolerate employee suffering.

Peter eventually learned how to manage his boss and make work enjoyable. He did this by showcasing his value in highly personable encounters with top levels of management. The question is, how would he have fared if his boss had been an algorithm?


Robert Donoghue, PhD Candidate, Social and Policy Sciences, University of Bath and Tiago Vieira, PhD Candidate, Political and Social Sciences, European University Institute

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How AI Technology Can Be Helpful in Unique Content Writing?
13 October 2022

To be successful in the contemporary corporate world, you need a strong digital presence across the web. The core practice for earning that identity is to develop a highly optimized website and build social media handles on different platforms. However, traffic and sales improve further when you publish quality content on your channels. That is why it is important to pay proper attention to content production and keep it free of any kind of duplication and plagiarism.

There is no doubt that human-generated content has wider appeal and acceptance. However, when you are asked to produce content in bulk within a limited span of time, it becomes quite difficult to manage things. Therefore, one must learn the art of using Artificial Intelligence (AI) based tools to finish the work before the deadline. Using AI tools increases efficiency and lowers the error ratio. You might be wondering what AI technology is and how it is helpful in unique content writing. You don’t need to go anywhere else; we explain that in this blog post.

AI- Technology and Its Role in Content Writing

Artificial Intelligence is the ability of digital computers and robots to perform actions usually associated with intelligent beings. AI has found a base in almost every field of daily life, and writing is no exception. A number of online utilities are now used to generate unique, readable content for a global audience. Using such services, people can produce a large quantity of written content quickly. This practice not only saves time but also brings down costs. Below are a few remarkable benefits these web tools offer.

Check Plagiarism Level in Your Content

A number of online services can help you check the plagiarism level in your content once you are done writing. As a result, the editing process can be sped up, allowing you to publish more content in a shorter time. When you run your content through a plagiarism checker, it instantly compares it with other pages and reports similarity and plagiarism. Following the results, you can remove the duplicated sentences and publish your work. There will also be no fear of copyright strikes or other legal penalties, so you can work with more focus. On the contrary, without access to such a service, you may publish copied passages on your site, intentionally or unintentionally, which could damage your credibility.
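To show roughly how such a checker measures overlap, the hedged sketch below compares two texts by the share of word 3-grams they have in common. Commercial checkers compare against indexed web pages and use far more sophisticated matching; this only illustrates the idea of a similarity score.

```python
# Hedged sketch: a crude similarity score based on shared word 3-grams.
# Real plagiarism checkers compare against indexed web content and use far
# more sophisticated matching; this only illustrates the idea.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(text_a, text_b, n=3):
    a, b = ngrams(text_a, n), ngrams(text_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)   # Jaccard overlap of the 3-gram sets

draft = "AI tools can help writers produce unique content quickly and easily"
source = "Writers can use AI tools to produce unique content quickly"
print(f"Similarity: {similarity(draft, source):.0%}")
```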

Paraphrase Content to Enhance Productivity

Another great service AI technology offers is paraphrasing. You can use a number of free paraphrasing tools online to rephrase articles so they read as new. A paraphrasing tool can be very helpful when you have to write multiple articles in a single day. Instead of writing every topic from scratch, you can gather chunks of material from various sources related to your topic and place that content into a rephrasing tool. Drawing on an AI-based database, it will rewrite the sentences and give your collected material a distinctive look, letting you produce content quickly. Clear, simple writing also attracts a larger audience, and a paraphrasing tool helps keep the main point straightforward and easy to understand, so more people tend to read what you have to say. Furthermore, if you find plagiarism in your writing, a rephrasing tool can help you remove it.

Remove Grammar Mistakes to Make Your Content Engaging

AI-based utilities can also help you eliminate all kinds of grammatical mistakes in your content, making your writing more engaging and easier to understand. Manual editing can consume a lot of time, but a web-based tool completes the task in seconds. Moreover, a grammar checker also offers suggestions for more suitable words, better sentence structure, and correct grammar usage.

Conclusion

Your content needs to be accurate and spot-on to get customers’ attention and bring more traffic and sales. You can do that manually, but if you know how to combine manual effort with AI technology, your productivity and efficiency will be higher. Not just that, it will also make your content more readable and easier to understand.
