If we were really smart, we’d get over our fixation on the IQ test (article)
This topic has 10 replies, 2 voices, and was last updated 6 years, 3 months ago by c_howdy.
June 20, 2018 at 8:43 am #52637 | c_howdy (Participant)
Scores are falling across the world, provoking headlines of ‘dumbing down’. But what does it measure anyway?
By David Olusoga
https://www.theguardian.com/commentisfree/2018/jun/17/dumbing-down-or-need-better–smarter-measure
IQ tests have a troubled history. Although their inventor, the intellectually cautious Frenchman Alfred Binet, understood and acknowledged their limitations, many of those who went on to deploy and develop his ideas did not. Within years of their emergence, IQ tests were being used by US eugenicists to weed out the “feebleminded”, and by politicians keen to cloak their calls for greater racial segregation and changes to American immigration laws with a degree of scientific legitimacy.
From the start, Binet’s tests were also drawn into the debate over whether human intelligence is predominantly hereditary or better understood as a reflection of environmental factors such as education – one part of the sprawling nature v nurture debate.
Some of the problem with IQ tests stems from the inescapable reality that human intelligence is staggeringly complex and multifaceted. In one attempt to reflect this, modern researchers draw a distinction between crystallised intelligence, the fruits of learning and training, and fluid intelligence, the capacity of an individual to recognise patterns and problem-solve using logic.
Although modern IQ tests are much more sophisticated than those developed by Binet in the early 20th century, so many factors have been shown to apparently influence the results – everything from eating fish once a week to simply practising the types of question the tests are based on – that some reject the whole contention that IQ offers a meaningful, stable and reliable measure of intelligence.
Those of us outside these debates tend to be less sniffy about IQ tests, particularly when the dispatches from the frontline of intelligence research are flattering, as they were for much of the latter 20th century. The good news then was that, in the decades after the Second World War, improvements in measured intelligence were regularly recorded in various nations around the world. Each postwar decade saw an increase of around three points.
The trend is known as the Flynn effect, after Professor James R Flynn, a leading researcher. While the increases were warmly welcomed, the underlying reasons were never properly understood, and multiple explanations were put forward. But in recent years, successive studies have delivered the sobering news that the boom years are over. We have passed peak Flynn, and test scores are falling.
The most recent study, published last week, was carried out by Ole Rogeberg and Bernt Bratsberg, of the Ragnar Frisch Centre for Economic Research in Oslo. Their work makes use of a ready-made data set, a huge sample of 730,000 18- to 19-year-old Norwegian men who did compulsory national service between 1970 and 2009. Part of their training and assessment involved taking standardised IQ tests.
Comparing the scores of successive generations of Norwegian draftees, Rogeberg and Bratsberg discovered that those born in 1991 scored about five points lower than those born in 1975, their fathers’ generation. The research even showed declines within families, with sons achieving lower scores than their fathers had managed earlier.
All this might be dismissed as a problem for the Norwegian army, were it not for studies in France, Denmark, Finland, the Netherlands and the UK that have all detected similar trends.
Some researchers suggest that this is merely the flatlining of an uptick. For much of the 20th century, people in the west were getting taller – the average height of men in the Netherlands is now around 184cm, making them 19cm taller than their 19th-century ancestors. But no one expects this to continue indefinitely, until Dutch males are at risk of posing a threat to air traffic. Already, there are signs that similar height increases among Americans, recorded over the course of the 20th century, are drawing to an end.
The reasons for the decline in IQ scores are unknown and disputed. But the Norwegian researchers were quick to state that the probable causes were environmental rather than genetic, and, like most scientific papers, theirs concluded by calling for further research and careful consideration. To some sections of the British press and in the more rabid corners of social media, that sort of academic rigour was just not good enough. What they wanted was not intellectual caution but for science to legitimise the rehashing of old arguments and old political obsessions.
That the Oslo report coincides with a new series of Love Island has created the perfect tabloid storm. By skipping the complex and unhelpfully nuanced parts of the Norwegian report, it was possible, with a following wind and stiff drink, to take the research and blame a decline in IQ test results in the Norwegian army on child-focused teaching methods, calculators in maths exams and (for good measure) the internet and gaming. Had the findings come in 1978, the blame would have been apportioned to colour TV and the Sex Pistols.
But, as with so many of these stories, the most interesting questions that remain, after the political posturing is over, are about IQ tests themselves. As the Oslo academics acknowledge, what their research might indicate is not that Norwegians born in the 1990s are less intelligent than those born in the 1970s, but that a testing system designed more than a century ago may be approaching its sell-by date.
What needs to be debated is whether IQ tests, as currently designed, are fit for purpose, and capable of measuring the changing nature of intelligence in the 21st century among generations brought up with digital technology and different learning habits.
David Olusoga is a historian and broadcaster.
June 20, 2018 at 5:17 pm #52639 | rideforever (Participant)
If we were really smart we’d get over the destruction of the planet we live on.
If we were really smart we’d get over soaring rates of mental health illnesses.
If we were really smart we’d get over young people being incompetent to do anything but look at their phone.
etc…

In the diving world, if you pass out whilst under water you will naturally float upwards to the surface. Unless… you are below a certain depth, and then you don’t rise, you just fall into the abyss.

Likewise, once a culture’s intelligence has fallen very low, no-one remembers which way is up, and the culture is in freefall to destruction. This is that culture, and the evidence is everywhere around you.

June 28, 2018 at 6:24 pm #52665 | c_howdy (Participant)
‘Breakthrough’ algorithm exponentially faster than any previous one
June 28, 2018, Harvard John A. Paulson School of Engineering and Applied Sciences
https://techxplore.com/news/2018-06-breakthrough-algorithm-exponentially-faster-previous.html
What if a large class of algorithms used today—from the algorithms that help us avoid traffic to the algorithms that identify new drug molecules—worked exponentially faster?
Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a completely new kind of algorithm, one that exponentially speeds up computation by dramatically reducing the number of parallel steps required to reach a solution.
The researchers will present their novel approach at two upcoming conferences: the ACM Symposium on Theory of Computing (STOC), June 25-29, and the International Conference on Machine Learning (ICML), July 10-15.
A lot of so-called optimization problems – problems that find the best solution from all possible solutions, such as mapping the fastest route from point A to point B – rely on sequential algorithms that haven’t changed since they were first described in the 1970s. These algorithms solve a problem by following a sequential step-by-step process, and the number of steps is proportional to the size of the data. This has led to a computational bottleneck, leaving lines of inquiry and areas of research that are just too computationally expensive to explore.
“These optimization problems have a diminishing returns property,” said Yaron Singer, Assistant Professor of Computer Science at SEAS and senior author of the research. “As an algorithm progresses, its relative gain from each step becomes smaller and smaller.”
Singer and his colleague asked: what if, instead of taking hundreds or thousands of small steps to reach a solution, an algorithm could take just a few leaps?
“This algorithm and general approach allows us to dramatically speed up computation for an enormously large class of problems across many different fields, including computer vision, information retrieval, network analysis, computational biology, auction design, and many others,” said Singer. “We can now perform computations in just a few seconds that would have previously taken weeks or months.”
“This new algorithmic work, and the corresponding analysis, opens the doors to new large-scale parallelization strategies that have much larger speedups than what has ever been possible before,” said Jeff Bilmes, Professor in the Department of Electrical Engineering at the University of Washington, who was not involved in the research. “These abilities will, for example, enable real-world summarization processes to be developed at unprecedented scale.”
Traditionally, algorithms for optimization problems narrow down the search space for the best solution one step at a time. In contrast, this new algorithm samples a variety of directions in parallel. Based on that sample, the algorithm discards low-value directions from its search space and chooses the most valuable directions to progress towards a solution.
Take this toy example:
You’re in the mood to watch a movie similar to The Avengers. A traditional recommendation algorithm would sequentially add, at every step, a single movie with attributes similar to those of The Avengers. In contrast, the new algorithm samples a group of movies at random, discarding those that are too dissimilar to The Avengers. What’s left is a batch of movies that are diverse (after all, you don’t want ten Batman movies) but similar to The Avengers. The algorithm continues to add batches in every step until it has enough movies to recommend.
This process of adaptive sampling is key to the algorithm’s ability to make the right decision at each step.
“Traditional algorithms for this class of problem greedily add data to the solution while considering the entire dataset at every step,” said Eric Balkanski, graduate student at SEAS and co-author of the research. “The strength of our algorithm is that in addition to adding data, it also selectively prunes data that will be ignored in future steps.”
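For readers curious what this looks like in code, here is a minimal Python sketch of the batch-per-round idea described above, assuming a user-supplied set-scoring function; the sample size, threshold rule, and pruning step are illustrative stand-ins, not the authors’ published algorithm.

```python
# A minimal sketch of adaptive sampling for set-function maximization.
# `score` is a user-supplied function from a set to a number; the sample
# size and threshold rule below are illustrative assumptions.
import random

def marginal_gain(candidate, solution, score):
    """How much adding `candidate` improves the current `solution`."""
    return score(solution | {candidate}) - score(solution)

def adaptive_sample_maximize(universe, score, k, sample_size=64):
    solution, remaining = set(), set(universe)
    while len(solution) < k and remaining:
        # Evaluate a whole sample of candidates "at once" (in a real
        # implementation these evaluations would run in parallel).
        batch = random.sample(list(remaining), min(sample_size, len(remaining)))
        gains = {c: marginal_gain(c, solution, score) for c in batch}
        threshold = max(gains.values()) / 2.0
        # Prune low-value directions from the search space entirely...
        remaining -= {c for c, g in gains.items() if g < threshold}
        # ...and add a whole batch of high-value ones in a single round.
        for c in [c for c, g in gains.items() if g >= threshold][: k - len(solution)]:
            solution.add(c)
            remaining.discard(c)
    return solution

# Toy usage: pick 5 "movies" that together cover the most genres.
movies = {i: set(random.sample(range(20), 4)) for i in range(200)}
coverage = lambda s: len(set().union(*(movies[m] for m in s)) if s else set())
print(adaptive_sample_maximize(movies.keys(), coverage, k=5))
```

Because each round scores a whole sample of candidates rather than one element, the number of sequential rounds stays small even for large data sets, which is where the parallel speedup comes from.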
In experiments, Singer and Balkanski demonstrated that their algorithm could sift through a data set which contained 1 million ratings from 6,000 users on 4,000 movies and recommend a personalized and diverse collection of movies for an individual user 20 times faster than the state-of-the-art.
The researchers also tested the algorithm on a taxi dispatch problem, where there are a certain number of taxis and the goal is to pick the best locations to cover the maximum number of potential customers. Using a data set of two million taxi trips from the New York City Taxi and Limousine Commission, the adaptive-sampling algorithm found solutions 6 times faster.
“This gap would increase even more significantly on larger scale applications, such as clustering biological data, sponsored search auctions, or social media analytics,” said Balkanski.
Of course, the algorithm’s potential extends far beyond movie recommendations and taxi dispatch optimizations. It could be applied to:
- designing clinical trials for drugs to treat Alzheimer’s, multiple sclerosis, obesity, diabetes, hepatitis C, HIV and more
- evolutionary biology, to find good representative subsets of different collections of genes from large datasets of genes from different species
- designing sensor arrays for medical imaging
- detecting drug-drug interactions from online health forums
This process of active learning is also what solves the problem of diminishing returns.

“This research is a real breakthrough for large-scale discrete optimization,” said Andreas Krause, professor of Computer Science at ETH Zurich, who was not involved in the research. “One of the biggest challenges in machine learning is finding good, representative subsets of data from large collections of images or videos to train machine learning models. This research could identify those subsets quickly and have substantial practical impact on these large-scale data summarization problems.”
The Singer-Balkanski model and variants of the algorithm developed in the paper could also be used to more quickly assess the accuracy of a machine learning model, said Vahab Mirrokni, a principal scientist at Google Research, who was not involved in the research.
“In some cases, we have a black-box access to the model accuracy function which is time-consuming to compute,” said Mirrokni. “At the same time, computing model accuracy for many feature settings can be done in parallel. This adaptive optimization framework is a great model for these important settings and the insights from the algorithmic techniques developed in this framework can have deep impact in this important area of machine learning research.”
Singer and Balkanski are continuing to work with practitioners on implementing the algorithm.
Provided by Harvard John A. Paulson School of Engineering and Applied Sciences
July 23, 2018 at 3:07 pm #52742 | c_howdy (Participant)
Using non-invasive brain recordings to characterize activity in deep structures
July 16, 2018, Agency for Science, Technology and Research (A*STAR), Singapore
https://medicalxpress.com/news/2018-07-non-invasive-brain-characterize-deep.html
Many neurophysiological processes, such as memory, sensory perception and emotion, as well as diseases including Alzheimer’s, depression and autism, are mediated by brain regions located deep beneath the cerebral cortex. Techniques to non-invasively image millisecond-scale activity in these deep brain regions are limited. Now, an international team including A*STAR researchers has shown that magnetoencephalography (MEG) and electroencephalography (EEG) can be used to characterize fast timescale activity in these deep brain structures.
In a recent breakthrough, A*STAR’s Pavitra Krishnaswamy, in a research team spanning the United States, Sweden, and Finland, developed a statistical machine learning approach to resolve deep brain activity with high temporal and spatial resolution. The researchers used simulated test cases and experimental MEG/EEG recordings from healthy volunteers to demonstrate that their approach accurately maps out this deep brain activity amidst concurrent activity in cortical structures.
Deep brain activity typically generates weak MEG/EEG signals that are easily drowned out by louder signals arising from cortical activity. Therefore, characterizing the deep brain sources becomes akin to ‘picking out needles in a haystack’. Rather than solely relying on how ‘loud’ the brainwaves are, the team leveraged the fact that deep brain activity generates distinct spatial patterns across multiple MEG/EEG sensors positioned over the head.
The concept of sparsity—referring to how limited subsets of neurons across the brain ‘fire’ in sequential and coordinated patterns—also underpins the team’s research. If all of the cerebral cortex were active at the same instant, says Krishnaswamy, the cortical signal would “completely dwarf” that of the deep brain. However, when only a limited number of cortical regions are simultaneously active, it is possible to train an algorithm to see through into the deep brain. “When just a limited portion of the cortex is active, even though it appears louder than ongoing deep brain activity, it is possible to transform the data into a space where the deeper signals also have a distinct ‘voice’.”
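The “needles in a haystack” intuition can be made concrete with a toy inverse problem. The sketch below is not the published method (which is a richer statistical model); it only illustrates the core sparsity idea with an L1-penalized (Lasso) solve, using a made-up lead-field matrix and simulated sources.

```python
# A minimal sketch of sparse source estimation: recover source currents x
# from sensor data y = L @ x + noise, where an L1 penalty favors a small
# number of active sources. L, the dimensions, and alpha are illustrative
# assumptions, not the values used in the published work.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500                    # e.g., 64 MEG/EEG channels
L = rng.standard_normal((n_sensors, n_sources))   # hypothetical lead field

# Simulate two active sources: one "cortical" (loud), one "deep" (quiet).
x_true = np.zeros(n_sources)
x_true[10] = 5.0     # loud cortical source
x_true[400] = 1.0    # quiet deep source
y = L @ x_true + 0.1 * rng.standard_normal(n_sensors)

# Sparse inverse solution: most coefficients are driven to exactly zero,
# letting the weak deep source keep a distinct 'voice' of its own.
model = Lasso(alpha=0.05, max_iter=10000).fit(L, y)
active = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("recovered active sources:", active)
```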
Some of Krishnaswamy’s collaborators will now look to validate the approach for possible neuroscience and clinical applications. She will investigate ways to further develop such statistical machine learning approaches for adjacent applications in medical image analysis where the goals are to resolve low signal-to-noise features, enhance reconstruction quality and ultimately reduce diagnostic error.
More information: Pavitra Krishnaswamy et al. Sparsity enables estimation of both subcortical and cortical activity from MEG and EEG, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1705414114
July 31, 2018 at 3:24 pm #52781 | c_howdy (Participant)
Past experiences shape what we see more than what we are looking at now
July 31, 2018, NYU Langone Health
https://medicalxpress.com/news/2018-07-past-experiences-shape-what-we.html
A rope coiled on a dusty trail may trigger a frightened jump by a hiker who recently stepped on a snake. Now a new study better explains how a one-time visual experience can shape perceptions afterward.
Led by neuroscientists from NYU School of Medicine and published online July 31 in eLife, the study argues that humans recognize what they are looking at by combining current sensory stimuli with comparisons to images stored in memory.
“Our findings provide important new details about how experience alters the content-specific activity in brain regions not previously linked to the representation of images by nerve cell networks,” says senior study author Biyu He, Ph.D., assistant professor in the departments of Neurology, Radiology, and Neuroscience and Physiology.
“The work also supports the theory that what we recognize is influenced more by past experiences than by newly arriving sensory input from the eyes,” says He, part of the Neuroscience Institute at NYU Langone Health.
She says this idea becomes more important as evidence mounts that hallucinations suffered by patients with post-traumatic stress disorder or schizophrenia occur when stored representations of past images overwhelm what they are looking at presently.
A key question in neurology is about how the brain perceives, for instance, that a tiger is nearby based on a glimpse of orange amid the jungle leaves. If the brains of our ancestors matched this incomplete picture with previous danger, they would be more likely to hide, survive and have descendants. Thus, the modern brain finishes perception puzzles without all the pieces.
Most past vision research, however, has been based on experiments wherein clear images were shown to subjects in perfect lighting, says He. The current study instead analyzed visual perception as subjects looked at black-and-white images degraded until they were difficult to recognize. Nineteen subjects were shown 33 such obscured “Mooney” images—17 of animals and 16 manmade objects—in a particular order. They viewed each obscured image six times, then a corresponding clear version once to achieve recognition, and then the blurred images again six times after. Following the presentation of each blurred image, subjects were asked if they could name the object shown.
As the subjects sought to recognize images, the researchers “took pictures” of their brains every two seconds using functional magnetic resonance imaging (fMRI). The technology lights up with increased blood flow, which is known to happen as brain cells are turned on during a specific task. The team’s 7 Tesla scanner offered a more than three-fold improvement in resolution over past studies using standard 3 Tesla scanners, allowing extremely precise fMRI-based measurement of vision-related nerve circuit activity patterns.
After seeing the clear version of each image, the study subjects were more than twice as likely to recognize what they were looking at when again shown the obscured version as they had been before seeing the clear version. They had been “forced” to use stored representations of clear images, called priors, to better recognize related, blurred versions, says He.
The authors then used mathematical techniques to create a 2-D map that measured, not nerve cell activity in each tiny section of the brain as it perceived images, but instead how similar nerve network activity patterns were across different brain regions. Nerve cell networks that represented images more similarly landed closer to each other on the map.
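As a rough illustration of this kind of similarity mapping, the sketch below builds a dissimilarity matrix between hypothetical brain regions from random stand-in activity patterns and flattens it into 2-D with multidimensional scaling; the region names, array shapes, and use of MDS are assumptions for illustration, not the paper’s exact pipeline.

```python
# A minimal sketch: compare how similarly different brain regions represent
# a set of images, then flatten those comparisons into a 2-D map. The data
# are random stand-ins; region names and shapes are illustrative assumptions.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
regions = ["V1", "V4", "IT", "DMN", "FPN"]
n_images, n_voxels = 33, 200
# One (images x voxels) activity-pattern matrix per region.
patterns = {r: rng.standard_normal((n_images, n_voxels)) for r in regions}

def rdm(p):
    """Representational dissimilarity: 1 - correlation between image patterns."""
    return 1.0 - np.corrcoef(p)

# Two regions are "close" if their image-by-image dissimilarity structure matches.
n = len(regions)
dist = np.zeros((n, n))
iu = np.triu_indices(n_images, k=1)
for i in range(n):
    for j in range(n):
        a, b = rdm(patterns[regions[i]]), rdm(patterns[regions[j]])
        dist[i, j] = 1.0 - np.corrcoef(a[iu], b[iu])[0, 1]

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
for r, (x, y) in zip(regions, coords):
    print(f"{r}: ({x:+.2f}, {y:+.2f})")
```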
This approach revealed a stable system of brain organization that processed each image through the same steps regardless of whether it was clear or blurry, the authors say. Early, simpler brain circuits in the visual cortex that determine edge, shape, and color clustered at one end of the map, and more complex, “higher-order” circuits known to mix past and present information to plan actions clustered at the opposite end.
These higher-order circuits included two brain networks, the default-mode network (DMN) and frontoparietal network (FPN), both linked by past studies to executing complex tasks such as planning actions, but not to visual, perceptual processing. Rather than remaining stable in the face of all images, the similarity patterns in these two networks shifted as brains went from processing unrecognized blurry images to effortlessly recognizing the same images after seeing a clear version. After previously seeing a clear version (disambiguation), neural activity patterns corresponding to each blurred image in the two networks became more distinct from the others, and more like the clear version in each case.
Strikingly, the clear image-induced shift of neural representation towards perceptual prior was much more pronounced in brain regions with higher, more complex functions than in the early, simple visual processing networks. This further suggests that more of the information shaping current perceptions comes from what people have experienced before.
Journal reference: eLife
Provided by: NYU Langone Health
August 1, 2018 at 2:33 am #52783 | c_howdy (Participant)
A neural network that operates at the speed of light
July 27, 2018 by Bob Yirka, Tech Xplore
https://techxplore.com/news/2018-07-neural-network.html
A team of researchers at the University of California has developed a novel kind of neural network—one that uses light instead of electricity to arrive at results. In their paper published in the journal Science, the group describes their ideas, their working device, its performance, and the types of applications they believe could be well served by such a network.
Deep learning networks are computer systems that “learn” by looking at many examples of data types and then use patterns that develop as a way to make interpretations of new data. Like all other computers, they run on electricity. In this new effort, the researchers have found a way to create a deep learning network that does not use electricity at all—instead, it uses light. They call it a diffractive deep neural network, or more succinctly, D2NN.
To build such a network, the researchers created small plastic plates printed using a 3-D printer. Each plate represented a layer of virtual neurons—and each neuron could behave like its biological counterpart by either transmitting or reflecting incoming light. In their example, they used five plates lined up face-to-face with a small space between them. When the system was operating, light from a laser was directed at the first plate and made its way through to the second, third, fourth and fifth in a way that revealed information about an object placed in front of the device. A sensor at the back read the light and interpreted what was found.
To test their idea, the researchers chose to create a physical neural network able to recognize the numerals zero through nine, and then to report what it found. In practice, the system was shown a number on a display and responded by identifying the number and then displaying it using the sensor. The system was fed 55,000 images of numbers that had been scanned. This learning phase required the use of electricity as it ran on a computer that fed the system the data. In testing their system by showing it thousands of numbers, the researchers report that it was approximately 95 percent accurate. They note that their device was a proof of concept and could prove useful as a means of developing dedicated devices for applications that require speed—such as picking faces out of a crowd of moving people.
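To make the optical forward pass more concrete, here is a minimal numerical sketch of a diffractive network: a coherent field is modulated by successive phase masks (the printed plates) and propagated between them with the angular-spectrum method, and the brightest detector patch gives the predicted digit. The grid size, wavelength, plate spacing, and random (untrained) masks are illustrative assumptions, not the published design.

```python
# A minimal sketch of a diffractive deep network's forward pass.
# All physical parameters below are illustrative assumptions.
import numpy as np

N = 64                 # grid points per side
wavelength = 0.75e-3   # 0.75 mm, a plausible terahertz-regime choice
pitch = 0.4e-3         # pixel pitch of each plate
z = 3e-3               # spacing between plates

def propagate(field, z):
    """Angular-spectrum free-space propagation over distance z."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(2)
# Five plates, each a learned phase mask; here they are random placeholders.
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

field = rng.uniform(0, 1, (N, N)).astype(complex)   # stand-in input digit
for mask in masks:
    field = propagate(field * mask, z)               # modulate, then diffract

intensity = np.abs(field) ** 2
# Ten detector patches, one per digit; the brightest patch wins.
patches = [intensity[8*r:8*r+8, 8*c:8*c+8].sum()
           for r in range(2) for c in range(5)]
print("predicted digit:", int(np.argmax(patches)))
```

In the real device the phase masks are optimized on a computer during training and then 3-D printed, after which inference costs no electricity beyond the laser and sensor.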
More information: Xing Lin et al. All-optical machine learning using diffractive deep neural networks, Science (2018). DOI: 10.1126/science.aat8084
August 6, 2018 at 6:14 pm #52801 | c_howdy (Participant)
Particle physicists team up with AI to solve toughest science problems
August 2, 2018 by Manuel Gnida, SLAC National Accelerator Laboratory
Experiments at the Large Hadron Collider (LHC), the world’s largest particle accelerator at the European particle physics lab CERN, produce about a million gigabytes of data every second. Even after reduction and compression, the data amassed in just one hour is similar to the data volume Facebook collects in an entire year – too much to store and analyze.
Luckily, particle physicists don’t have to deal with all of that data all by themselves. They partner with a form of artificial intelligence called machine learning that learns how to do complex analyses on its own.
A group of researchers, including scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Fermi National Accelerator Laboratory, summarize current applications and future prospects of machine learning in particle physics in a paper published today in Nature.
“Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless hours of design and analysis work,” says co-author Alexander Radovic from the College of William & Mary, who works on the NOvA neutrino experiment.
To handle the gigantic data volumes produced in modern experiments like the ones at the LHC, researchers apply what they call “triggers” – dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out.
In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70 percent of these decisions, says LHCb scientist Mike Williams from the Massachusetts Institute of Technology, one of the authors of the Nature summary. “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he says.
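As a toy illustration of an ML trigger (not LHCb’s actual software), the sketch below trains a classifier on made-up event features and keeps only events that score above a confidence threshold; the features, data, and cut value are all assumptions.

```python
# A minimal sketch of an ML-based trigger: score each collision event on a
# few reconstructed features and keep only high-scoring events. The toy
# features and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 5000
# Toy features: transverse momentum, vertex displacement, track fit quality.
X_bg = np.column_stack([rng.exponential(1.0, n), rng.exponential(1.0, n),
                        rng.normal(1.0, 0.3, n)])
X_sig = np.column_stack([rng.exponential(3.0, n), rng.exponential(4.0, n),
                         rng.normal(1.0, 0.3, n)])
X = np.vstack([X_bg, X_sig])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = background, 1 = signal

clf = GradientBoostingClassifier().fit(X, y)

def trigger(event_features, threshold=0.9):
    """Keep an event only if the classifier is confident it looks like signal."""
    return clf.predict_proba(event_features.reshape(1, -1))[0, 1] > threshold

print("kept?", trigger(np.array([5.0, 8.0, 1.0])))
```

The high threshold reflects the caution Williams describes below: an event discarded by the trigger is gone forever, so the cost of a false rejection is far higher than the cost of keeping a little extra background.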
Machine learning has proven extremely successful in the area of analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals need to be put together to obtain meaningful results.
“These signals make up a complex data space,” says Michael Kagan from SLAC, who works on ATLAS and was also an author on the Nature review. “We need to understand the relationship between them to come up with conclusions, for example that a certain particle track in the detector was produced by an electron, a photon or something else.”
Neutrino experiments also benefit from machine learning. NOvA, which is managed by Fermilab, studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of a new neutrino type that some theories predict to be a particle of dark matter. NOvA’s detectors are watching out for charged particles produced when neutrinos hit the detector material, and machine learning algorithms identify them.
Recent developments in machine learning, often called “deep learning,” promise to take applications in particle physics even further. Deep learning typically refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.
These neural nets learn on their own how to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and told how well they performed.
Until recently, the success of neural nets was limited because training them used to be very hard, says co-author Kazuhiro Terao, a SLAC researcher working on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab’s short-baseline neutrino program and will become a component of the future Deep Underground Neutrino Experiment (DUNE) at the Long-Baseline Neutrino Facility (LBNF). “These difficulties limited us to neural networks that were only a couple of layers deep,” he says. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep.”
Many of the advances in deep learning are driven by tech giants’ commercial applications and the data explosion they have generated over the past two decades. “NOvA, for example, uses a neural network inspired by the architecture of the GoogleNet,” Radovic says. “It improved the experiment in ways that otherwise could have only been achieved by collecting 30 percent more data.”
Machine learning algorithms become more sophisticated and fine-tuned day by day, opening up unprecedented opportunities to solve particle physics problems.
Many of the new tasks they could be used for are related to computer vision, Kagan says. “It’s similar to facial recognition, except that in particle physics, image features are more abstract than ears and noses.”
Some experiments like NOvA and MicroBooNE produce data that is easily translated into actual images, and AI can be readily used to identify features in them. In LHC experiments, on the other hand, images first need to be reconstructed from a murky pool of data generated by millions of sensor elements.
“But even if the data don’t look like images, we can still use computer vision methods if we’re able to process the data in the right way,” Radovic says.
One area where this approach could be very useful is the analysis of particle jets produced in large numbers at the LHC. Jets are narrow sprays of particles whose individual tracks are extremely challenging to separate. Computer vision technology could help identify features in jets.
Another emerging application of deep learning is the simulation of particle physics data that predict, for example, what happens in particle collisions at the LHC and can be compared to the actual data. Simulations like these are typically slow and require immense computing power. AI, on the other hand, could do simulations much faster, potentially complementing the traditional approach.
“Just a few years ago, nobody would have thought that deep neural networks can be trained to ‘hallucinate’ data from random noise,” Kagan says. “Although this is very early work, it shows a lot of promise and may help with the data challenges of the future.”
Despite all obvious advances, machine learning enthusiasts frequently face skepticism from their collaboration partners, in part because machine learning algorithms mostly work like “black boxes” that provide very little information about how they reached a certain conclusion.
“Skepticism is very healthy,” Williams says. “If you use machine learning for triggers that discard data, like we do in LHCb, then you want to be extremely cautious and set the bar very high.”
Therefore, establishing machine learning in particle physics requires constant efforts to better understand the inner workings of the algorithms and to do cross-checks with real data whenever possible.
“We should always try to understand what a computer algorithm does and always evaluate its outcome,” Terao says. “This is true for every algorithm, not only machine learning. So, being skeptical shouldn’t stop progress.”
Rapid progress has some researchers dreaming of what could become possible in the near future. “Today we’re using machine learning mostly to find features in our data that can help us answer some of our questions,” Terao says. “Ten years from now, machine learning algorithms may be able to ask their own questions independently and recognize when they find new physics.”
More information: Alexander Radovic et al. Machine learning at the energy and intensity frontiers of particle physics, Nature (2018). DOI: 10.1038/s41586-018-0361-2
Journal reference: Nature
Provided by: SLAC National Accelerator Laboratory
August 9, 2018 at 3:47 pm #52808 | c_howdy (Participant)
Nordic nations, North Americans and Antipodeans rank top in navigation skills
August 9, 2018, University College London
https://phys.org/news/2018-08-nordic-nations-north-americans-antipodeans.html
People in Nordic countries, North America, Australia, and New Zealand have the best spatial navigational abilities, according to a new study led by UCL and the University of East Anglia.
Researchers assessed data from over half a million people in 57 countries who played a specially-designed mobile game, developed to aid understanding of spatial navigation, impairment of which is a key early indicator of Alzheimer’s disease.
With so many people taking part, the team was able to show that spatial navigation ability declines steadily across adulthood in every country. A country’s GDP also had a significant bearing on performance, with Nordic countries among the highest performers, along with North America, Australia, and New Zealand.
And men performed better than women, but the gender gap narrowed in countries with greater gender equality, according to the study published today in Current Biology.
The paper is the first publication of findings from a collaborative project led by Deutsche Telekom, using the mobile game ‘Sea Hero Quest’, which is seeking to establish benchmarks in navigation abilities in order to help dementia research.
“We’ve found that the environment you live in has an impact on your spatial navigation abilities,” said the study’s lead author, Professor Hugo Spiers. “We’re continuing to analyse the data and hope to gain a better understanding of why people in some countries perform better than others.”
The research team has so far collected data from over four million people who have played Sea Hero Quest. In the mobile game, people play as a sea explorer completing a series of wayfinding tasks.
For the current study, the researchers restricted the data to those who had provided their age, gender and nationality, and were from countries with at least 500 participants.
As part of their analysis, the research team corrected for video game ability by comparing participants’ main results to their performance in tutorial levels which assessed aptitude with video games.
While age most strongly correlated with navigational performance, researchers also found that country wealth, as measured by GDP (gross domestic product), correlated with performance. The researchers say this relationship may be due to associations with education standards, health and ability to travel. They focused on GDP for this analysis as it was a standard metric available for every country, but as part of the ongoing research project they will follow up with further comparisons of other factors.
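A toy version of that country-level analysis might look like the following sketch, which corrects wayfinding scores by tutorial performance, averages by country, and correlates the result with approximate GDP per capita; the column names, toy data, and GDP figures are illustrative assumptions about how the data set is structured.

```python
# A minimal sketch of a country-level GDP correlation, on made-up data.
# Column names and the GDP-per-capita figures (k$, approximate, 2017)
# are illustrative assumptions, not the study's actual data set.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "country": rng.choice(["NO", "FI", "US", "NZ", "BR", "IN"], 3000),
    "wayfinding": rng.normal(0.0, 1.0, 3000),
    "tutorial": rng.normal(0.0, 1.0, 3000),
})

# Regress out video-game aptitude; keep the residual as "navigation ability".
slope, intercept, *_ = stats.linregress(df["tutorial"], df["wayfinding"])
df["ability"] = df["wayfinding"] - (intercept + slope * df["tutorial"])

by_country = df.groupby("country")["ability"].mean()
gdp = pd.Series({"NO": 75.5, "FI": 46.3, "US": 59.9,
                 "NZ": 42.9, "BR": 9.9, "IN": 2.0})
r, p = stats.pearsonr(by_country[gdp.index], gdp)
print(f"country-level correlation with GDP: r={r:.2f} (p={p:.2f})")
```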
The researchers also speculate about more specific factors. Top performing countries including Finland, Denmark, and Norway all share a national interest in orienteering, a sport relying on navigation, while the other top performing countries—New Zealand, Canada, the United States, and Australia—all have high rates of driving, which may also boost navigation ability.
Comparing the country-level results to the World Economic Forum’s Gender Gap Index, the researchers found a correlation between country-wide gender inequality and a larger male advantage in spatial navigation ability. The gender gap in game performance was also smaller in countries with greater economic wealth.

“Our findings suggest that sex differences in cognitive abilities are not fixed, but instead are influenced by cultural environments, such as the role of women in society,” said study co-author Dr. Antoine Coutrot, who completed the research at UCL before moving to the French National Centre for Scientific Research (CNRS).

Gender equality has previously been found to eliminate differences in maths performance in school. The current study is the first to connect gender inequality to a more specific cognitive measure.
The researchers say that, in the future, an adapted version of the game, first launched by Deutsche Telekom in 2016, may be used as a screening tool for an early warning sign of dementia, as well as a means to monitor disease progress and as an outcome measure for clinical trials.
“Standard current tests for dementia don’t effectively tap into the primary initial symptom of being disoriented in space, so we are trying to find an easy way of measuring that, efficiently validated by crowd-sourcing our data,” said Professor Spiers.
“It’s promising to see that the effect of nationality is relatively small, as it suggests that the game could be used as a relatively universal test for spatial navigation abilities,” said co-lead author Professor Michael Hornberger (The University of East Anglia).
The study was conducted by researchers at UCL, the University of East Anglia, McGill University, Bournemouth University, ETH Zurich and Northumbria University.

The data collected by Sea Hero Quest is stored in a secure T-Systems server in Germany, and all analysis by the UCL and UEA-led team is conducted on entirely anonymous data. From September 2018, Deutsche Telekom will be providing access to this unprecedented data set in order to aid future discoveries, not only in dementia but across the wider field of neuroscience research. Access will be granted via a bespoke, secure web portal, allowing cloud-based analysis of the data. It is facilitated by T-Systems.
“We are excited to announce that we will be making this unique data set accessible to researchers across the world to continue to support studies of this kind,” said Deutsche Telekom Chief Brand Officer, Hans-Christian Schwingen. “Sea Hero Quest demonstrates the power of mobile technology in helping to collect important data at scale, advancing research into some of the most pressing healthcare issues of our time.”
“The data from Sea Hero Quest is providing an unparalleled benchmark for how human navigation varies and changes across age, location and other factors. The ambition is to use these data insights to inform the development of more sensitive diagnostic tools for diseases like Alzheimer’s, where navigational abilities can be compromised early on. With such a huge number of people participating in Sea Hero Quest, this really is only the beginning of what we might learn about navigation from this powerful analysis,” said Tim Parry, Director at Alzheimer’s Research UK, which funded the analysis.
More information: Antoine Coutrot et al, Global Determinants of Navigation Ability, Current Biology (2018). DOI: 10.1016/j.cub.2018.06.009
Journal reference: Current Biology
Provided by: University College London
August 13, 2018 at 4:08 pm #52820 | c_howdy (Participant)
In neuropsychology, linguistics, and the philosophy of language, a natural language or ordinary language is any language that has evolved naturally in humans through use and repetition without conscious planning or premeditation. Natural languages can take different forms, such as speech or signing. They are distinguished from constructed and formal languages such as those used to program computers or to study logic.
-https://en.wikipedia.org/wiki/Natural_language
Can a computer write a sonnet as well as Shakespeare?
August 8, 2018, University of Toronto
https://techxplore.com/news/2018-08-sonnet-shakespeare.html
August 29, 2018 at 4:36 pm #52898 | c_howdy (Participant)
Bodily sensations give rise to conscious feelings
August 29, 2018, University of Turku
https://medicalxpress.com/news/2018-08-bodily-sensations-conscious.html
Humans constantly experience an ever-changing stream of subjective feelings that is only interrupted during sleep and deep unconsciousness. Finnish researchers show how the subjective feelings map into five major categories: positive emotions, negative emotions, cognitive functions, somatic states, and illnesses. All these feelings were imbued with strong bodily sensations.
“These results show that conscious feelings stem from bodily feedback. Although consciousness emerges due to brain function and we experience our consciousness to be “housed” in the brain, bodily feedback contributes significantly to a wide variety of subjective feelings,” says Associate Professor Lauri Nummenmaa from Turku PET Centre.
According to the researchers, emotions vividly colour all our feelings as pleasant or unpleasant. It is possible that during evolution, consciousness originally emerged to inform the organism and others around it about tissue damage and well-being. This development may have paved the way for the emergence of language, thinking and reasoning.
“Subjective well-being is an important determinant of our prosperity, and pain and negative emotions are intimately linked with multiple somatic and psychological illnesses. Our findings help to understand how illnesses and bodily states in general influence our subjective well-being. Importantly, they also demonstrate the strong embodiment of cognitive and emotional states,” says Nummenmaa.
The study was conducted in the form of an online questionnaire in which more than 1,000 people participated. The participants first evaluated a total of 100 feeling states in terms of how much they are experienced in the body and mind, and how emotional and controllable they are. Next, they also evaluated how similar the feelings are with respect to each other, and whereabouts in the body they are felt.
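The grouping of 100 feeling states into five categories could, for example, be obtained by hierarchical clustering of the participants’ similarity judgments; the sketch below does this on a random stand-in dissimilarity matrix, and the clustering choice is an assumption rather than the study’s documented method.

```python
# A minimal sketch: group 100 feeling states into five categories by
# hierarchical clustering of a (here random) similarity matrix. The real
# study derived its matrix from participants' similarity ratings.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
n_feelings = 100
sim = rng.uniform(0, 1, (n_feelings, n_feelings))
sim = (sim + sim.T) / 2          # similarity judgments are symmetric
np.fill_diagonal(sim, 1.0)       # every feeling is identical to itself
dissim = 1.0 - sim

labels = AgglomerativeClustering(
    n_clusters=5, metric="precomputed", linkage="average"
).fit_predict(dissim)
print("cluster sizes:", np.bincount(labels))
```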
More information: L. Nummenmaa et al. Bodily maps of emotions, Proceedings of the National Academy of Sciences (2013). DOI: 10.1073/pnas.1321664111
Journal reference: Proceedings of the National Academy of Sciences
Provided by: University of Turku
August 29, 2018 at 7:13 pm #52899 | c_howdy (Participant)
If words are the building blocks of language, most writers are content to arrange the blocks neatly into phrases and sentences to build books and plays. Not Shakespeare. Rather than play nicely with the set of blocks he’s given, he changes the blocks themselves: gluing on bits, slicing off chunks, and nailing them together in clever and imaginative ways.
-SCOTT KAISER, Shakespeare’s Wordcraft