Last year, we inaugurated our annual celebration of The Biggest Math Story of the Year, where we sought out the most significant story to rock the world of mathematics in 2014. This year, we're back to consider the biggest math-related stories in 2015 that have the real potential to impact our everyday lives!
That's an important distinction to make, because otherwise, we'd have to consider stories like Homer Simpson's discovery of the Higgs Boson, which became news in 2015, some two years after the particle itself was confirmed to exist when someone finally watched an old episode of The Simpsons and realized that the cartoon character had done the math back in 1998.
That distinction also affects the ongoing story of Shinichi Mochizuki's extraordinarily complex proof of the abc Conjecture from back in 2012, which has yet to be substantiated. The proof is so challenging, in fact, that it now occupies the status of being the biggest mystery in mathematics! If and when Mochizuki's proof is ever substantiated, or even if it comes up short, that story could very well be the biggest math story of the year. But for now, all we know is that it is not the biggest math story of 2015.
As for what that story is, well, it could have quite a bit to do with the intersection of the worlds of math, robotics and artificial intelligence, where stories of Google's accident-prone self-driving cars could soon be shaping the near-term future of everyday life and commerce.
But the rise of automation technology is an old story - one that's been told now for centuries. Just because computer processing and sensor technology has advanced to the point where we can successfully program computers to outperform human drivers or even movie critics doesn't necessarily make that story the biggest math story of the year.
Unless you've been living under a rock, you probably heard the term "income inequality" at some point in 2015. What you likely haven't heard is that a team of researchers at Tel-Aviv University developed a mathematical model suggesting that the increase in wealth inequality in recent years might be reversible.
For many Americans, the single biggest problem facing the country is the growing wealth inequality. Based on income tax data, wealth inequality in the US has steadily increased since the mid-1980s, with the top 10% of the population currently owning about 73% of the country's wealth. In a new paper published in PLOS ONE, researchers have quantitatively analyzed several of the major factors that affect wealth inequality dynamics, and found that the most crucial factor associated with the recent surge in wealth inequality since the '80s has been the dramatic decrease in personal savings, followed closely by a large increase in the dominance of capital income over labor income.
Taking these findings a step further, the researchers showed in their model that reversing these two trends can prevent and even reverse a further increase in wealth inequality in the future. The researchers hope that the findings will lead to policies that reproduce these results in the real world. But progress in this area may not even have to rely solely on policy changes, as the researchers note that the 2008 financial crisis has caused Americans to save more money, potentially bringing an opportunity to restrain some of the growth in wealth inequality.
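The intuition behind the savings-rate finding can be illustrated with a toy simulation, written in the same JavaScript this site's tools are built with. To be clear, the sketch below is our own illustrative model, not the one from the PLOS ONE paper: every agent earns the same labor income and the same return on existing wealth, so the savings rate controls how much of the equalizing labor income stream actually accumulates.

```javascript
// Toy sketch of wealth dynamics under different personal savings rates.
// The parameter values and the update rule are illustrative assumptions,
// not taken from the paper discussed above.
function simulateTopShare(savingsRate, steps = 50) {
  // ten agents: one starts wealthy, the rest start with one unit each
  let wealth = [100, 1, 1, 1, 1, 1, 1, 1, 1, 1];
  const capitalReturn = 0.05; // identical return on existing wealth
  const laborIncome = 1;      // identical labor income per period
  for (let t = 0; t < steps; t++) {
    // saved labor income is the same for everyone, so it narrows relative gaps
    wealth = wealth.map(w => w * (1 + capitalReturn) + savingsRate * laborIncome);
  }
  const total = wealth.reduce((a, b) => a + b, 0);
  const top = Math.max(...wealth); // the "top 10%" here is the single richest agent
  return top / total;
}
```

With a zero savings rate the top agent's share of total wealth never budges, because everyone's wealth is multiplied by the same factor each period; any positive savings rate steadily dilutes that share, which is the qualitative effect the researchers describe.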
Problem solved, right? Just a little prolonged financial crisis will fix all that's wrong in the world with income and wealth inequality!
Aside from the sloppy thinking ("here, you'll be completely cured if you swallow this poison-filled pill that will fill your remaining life with pain"), the problem here is that the Israeli researchers may not have modeled the problem properly, leading them to an incorrect conclusion. A new paper by Dean Baker rejects the dominance of capital income over labor income as the major driver of inequality, finding instead that outsize economic "rents" within labor income - earned via patents and copyrights, within the financial sector, among the limited pool of CEOs and executives, and more broadly by professionals in general - are responsible for much of the perceived increase in income inequality.
And since it's not clear at this point which story is the right one, the Israeli researchers' "solution" to income inequality is nowhere near close to being the biggest math story of the year.
Nor are the elaborate mathematical equations that were developed by the European Union's bureaucrats for determining how many migrants from the Middle East each nation of the EU would be required to take in. While the inundation of European nations by millions of migrants is indeed the biggest news story of 2015, the truth is that the math failed: many nations within Europe had their limited resources overwhelmed, forcing them to rethink the commitments they had made to accept those migrants.
A better candidate for the biggest math story of the year is the work of four mathematicians that promises to bridge number theory and geometry. Here, Xinyi Yuan, Wei Zhang, Zhiwei Yun and Xinwen Zhu combined to address the arithmetic fundamental lemma.
In December 2014, Zhang flew from New York to the West Coast, where he saw Yun and Yuan. The reason for the trip was a 60th-birthday conference at the Mathematical Sciences Research Institute in Berkeley for the Columbia mathematician Michael Harris, but Zhang also arrived with an idea he wanted to share with his friends. That idea had grown out of a conversation he’d had with Yun back in 2011. At that time, Yun had been thinking about work Zhang had done even earlier on a problem in the Langlands program known as the arithmetic fundamental lemma. Yun thought that some of those ideas could be combined with techniques from algebraic geometry, but he told Zhang he wasn’t sure if it was possible....
They left the conversation there for several years. Then in 2014, Zhang realized that Yun’s intuition was correct, and he began to see what it would take to prove it. The problem at hand involved L-functions, which Zhang had studied in graduate school. L-functions have what’s known as a Taylor expansion, in which they can be expressed as a sum of increasing powers. In 1986 Benedict Gross and Don Zagier were able to calculate the first term in the series.
Although L-functions were initially purely objects of number theory, they can also have a geometric interpretation, and powerful techniques from algebraic geometry can be used to study them. Yun had guessed that every term in the Taylor expansion should have a geometric interpretation; Zhang was able to precisely define what such an interpretation would look like. Whereas Gross and Zagier (and the French mathematician Jean-Loup Waldspurger) had been able to obtain exact formulas for the first and second term in the expansion, the new work would show how to obtain a geometric formula for every term.
[...]
The result still has to go through peer review, but it is already generating excitement in the math world. Among other implications, it opens a whole new window onto the famed Birch and Swinnerton-Dyer conjecture, which is one of the seven Millennium Prize Problems that carry a $1 million award for whoever solves them first.
From a practical perspective, should the proof hold and be useful in unlocking the Birch and Swinnerton-Dyer Conjecture, it will have an impact on building better cryptographic and coding systems.
In a really perverse way, however, sometimes the knowledge that you cannot solve a problem is extremely useful, which brings us to our next math story, where a mathematical paradox makes a physics problem unanswerable.
A logical paradox at the heart of mathematics and computer science turns out to have implications for the real world, making a basic question about matter fundamentally unanswerable.
In 1931, Austrian-born mathematician Kurt Gödel shook the academic world when he announced that some statements are ‘undecidable’, meaning that it is impossible to prove them either true or false. Three researchers have now found that the same principle makes it impossible to calculate an important property of a material — the gaps between the lowest energy levels of its electrons — from an idealized model of its atoms.
The result also raises the possibility that a related problem in particle physics — which has a US$1-million prize attached to it — could be similarly unsolvable, says Toby Cubitt, a quantum-information theorist at University College London and one of the authors of the study.
The finding, published on 9 December in Nature, and in a longer, 140-page version on the arXiv preprint server, is "genuinely shocking, and probably a big surprise for almost everybody working on condensed-matter theory", says Christian Gogolin, a quantum information theorist at the Institute of Photonic Sciences in Barcelona, Spain.
So what does knowing that a problem is "undecidable" do for a mathematician?
For starters, because it specifically applies to the deductive approach being taken to make some sort of determination in solving a problem, it allows the mathematician to dispose of the approach as a dead end rather than continue wasting their efforts upon it. Knowing with certainty what's not going to work then can be especially valuable in directing their work to more viable approaches.
That's close to being the biggest math story of the year, but we'll admit that we're biased in preferring stories that end with solutions being obtained over stories where they're not. And that brings us to the next story, in which it appears that the graph isomorphism problem may be on the verge of moving from being an almost insoluble NP problem to being a very solvable P problem instead.
A theoretical computer scientist has presented an algorithm that is being hailed as a breakthrough in mapping the obscure terrain of complexity theory, which explores how hard computational problems are to solve. Last month, László Babai, of the University of Chicago, announced that he had come up with a new algorithm for the “graph isomorphism” problem, one of the most tantalizing mysteries in computer science. The new algorithm appears to be vastly more efficient than the previous best algorithm, which had held the record for more than 30 years. His paper became available today on the scientific preprint site arxiv.org, and he has also submitted it to the Association for Computing Machinery’s 48th Symposium on Theory of Computing.
For decades, the graph isomorphism problem has held a special status within complexity theory. While thousands of other computational problems have meekly succumbed to categorization as either hard or easy, graph isomorphism has defied classification. It seems easier than the hard problems, but harder than the easy problems, occupying a sort of no man’s land between these two domains. It is one of the two most famous problems in this strange gray area, said Scott Aaronson, a complexity theorist at the Massachusetts Institute of Technology. Now, he said, “it looks as if one of the two may have fallen.”
Babai’s announcement has electrified the theoretical computer science community. If his work proves correct, it will be “one of the big results of the decade, if not the last several decades,” said Joshua Grochow, a computer scientist at the Santa Fe Institute.
This is a big, big deal! Before we go further though, let's consider what we're talking about when we throw fancy words like "graph isomorphism" around:
Computer scientists use the word “graph” to refer to a network of nodes with edges connecting some of the nodes. The graph isomorphism question simply asks when two graphs are really the same graph in disguise because there’s a one-to-one correspondence (an “isomorphism”) between their nodes that preserves the ways the nodes are connected. The problem is easy to state, but tricky to solve, since even small graphs can be made to look very different just by moving their nodes around.
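To make that concrete, here's a minimal brute-force isomorphism check, written in the same JavaScript this site's tools use. It simply tries every possible relabeling of one graph's nodes - the exponential-time baseline whose hopeless inefficiency on large graphs is exactly what Babai's algorithm improves upon. The function names and graph representation are our own choices:

```javascript
// Graphs are adjacency matrices: arrays of arrays of 0/1 entries.
// permutations() generates every possible relabeling of the node indices.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  arr.forEach((x, i) => {
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    permutations(rest).forEach(p => result.push([x].concat(p)));
  });
  return result;
}

// Two graphs are isomorphic if some relabeling of b's nodes makes its
// adjacency matrix match a's exactly - i.e., the correspondence preserves
// the ways the nodes are connected.
function isomorphic(a, b) {
  const n = a.length;
  if (b.length !== n) return false;
  return permutations([...Array(n).keys()]).some(p =>
    a.every((row, i) => row.every((cell, j) => cell === b[p[i]][p[j]]))
  );
}
```

For example, a three-node path graph and the same path with its nodes shuffled test as isomorphic, while the path and a triangle do not. The catch is that an n-node graph has n! relabelings to try, which is why this naive approach collapses almost immediately as graphs grow.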
Babai's algorithm is a major step forward because it promises to move an entire class of problems into the arena of problems that can be solved within a reasonable amount of time. Even with the limitation that the new algorithm cannot successfully address the special case of supersymmetry in nodal structures, it still offers tremendous opportunity for practical applications.
What applications? Just consider a few of the things that can become possible if you're able to quickly determine that two graphs share the same nodal structure: matching chemical compounds by their molecular structure, verifying that two versions of an electronic circuit design are equivalent, and recognizing similar patterns within biological and social networks - and that's just a short and incomplete list.
The development of the nearly comprehensive graph isomorphism algorithm is the biggest math story of 2015!
This is our final post for 2015. We'll be back early in 2016 with the next edition of our paycheck withholding tool. Until then, we'll leave you with one last math story for the holidays - how to send everyone home from your Christmas celebration a winner in the grand tradition of Christmas cracker pulling:
In the traditional approach, all dinner guests sit around the table, cross arms, and pull crackers with their two immediate neighbors. In this approach, each person has a 25% chance of winning zero crackers, so there are clearly inefficiencies in the system.
A better approach would be to use a system that starts by pairing up individuals and having each pair pull a single cracker. (For odd-sized groups one individual will have to stir the gravy or check the goose while this takes place.) Exactly [N/2] crackers are used in this round, with the same number of winners and losers.
Those who have not yet won continue as before until only a single individual remains. That individual then pulls a cracker with themselves and we are done.
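As a sanity check on the scheme above, here's a quick simulation of the rounds (our own sketch, in the JavaScript this site's tools are built with):

```javascript
// Simulate the pairing scheme: each round, people who have not yet won
// pair off and pull one cracker per pair (one person sits out of an odd
// round); losers carry on to the next round, and the last person standing
// pulls a cracker with themselves.
function crackerRounds(n) {
  let remaining = n; // people who have not yet won a cracker
  let crackers = 0;
  let rounds = 0;
  while (remaining > 1) {
    const pairs = Math.floor(remaining / 2);
    crackers += pairs;  // one cracker consumed per pair
    remaining -= pairs; // each pull produces exactly one winner
    rounds += 1;
  }
  crackers += 1; // the final self-pull
  rounds += 1;
  return { crackers, rounds };
}
```

Since every pull produces exactly one winner and everyone wins exactly once, a group of n people uses exactly n crackers, and the whole affair wraps up in roughly log₂(n) + 1 rounds - everyone goes home a winner.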
Have a Merry Christmas and Happy New Year!