Full Version Singularity
The Singularity Deck - Earth Full Set contains six suits: spades, hearts, diamonds, clubs, triangles, and ovals. Each suit contains all standard playing card ranks (A through K) plus a few extras: an 11 and a 12, a singularity card, a City card, and an Ω card. Each suit includes three copies of the A rank and one copy of every other rank, for a total of twenty cards per suit (A, A, A, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, J, Q, K, C, Ω, and singularity).
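As a quick sanity check on those numbers, here is a small sketch (illustrative only, not part of the product description) that enumerates one suit as listed above and confirms the per-suit and full-set counts:

```go
// Enumerate the deck described above and verify the card counts:
// 20 cards per suit, 6 suits, 120 cards in the full set.
package main

import "fmt"

func main() {
	suits := []string{"spades", "hearts", "diamonds", "clubs", "triangles", "ovals"}
	// Three copies of the A rank, then one copy each of the remaining ranks.
	ranks := []string{
		"A", "A", "A", "2", "3", "4", "5", "6", "7", "8", "9", "10",
		"11", "12", "J", "Q", "K", "C", "Ω", "singularity",
	}
	fmt.Printf("cards per suit: %d\n", len(ranks))                   // 20
	fmt.Printf("cards in the full set: %d\n", len(suits)*len(ranks)) // 120
}
```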
Today Sylabs announced that Singularity 3.1.0 is now generally available. With this release, Singularity is fully compliant with standards established by the Open Containers Initiative (OCI), and benefits from enhanced management of cached images. The open source Singularity codebase also continues to systematically incorporate changes specific to the Darwin operating environment as it progresses towards support for macOS platforms. These, and numerous other features of this latest release, are the target of significantly expanded Continuous Integration (CI) unit and end-to-end testing.
Support for an OCI-compliant runtime in Singularity complements the support for the OCI image specification introduced in the previous major version of the software. Taken together, Singularity now delivers full compliance with the existing OCI standards for both container images and runtimes. That the Singularity Image Format (SIF) so effectively and efficiently encapsulates the OCI runtime not only validates the extensibility and utility of the format, it also amplifies the significant strategic investment Sylabs made in its relatively recent development.
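As an illustration of the workflow this enables, the sketch below (not taken from the Sylabs announcement) drives the Singularity command line from Go: it pulls an OCI image from a registry, letting Singularity convert it to SIF, and then lists the local image cache. The subcommand names and flags (pull, cache list) are assumed from the 3.x CLI and may vary between releases.

```go
// Hedged sketch: shell out to the Singularity CLI to pull an OCI image into
// SIF and inspect the image cache. Assumes a 3.x "singularity" binary on PATH;
// exact subcommands and output may differ by release.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output to the terminal.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Pull a Docker/OCI image; Singularity converts it to a SIF file.
	if err := run("singularity", "pull", "docker://alpine:latest"); err != nil {
		log.Fatalf("pull failed: %v", err)
	}
	// List cached images and layers (cache management commands are assumed
	// to be available in this release).
	if err := run("singularity", "cache", "list"); err != nil {
		log.Fatalf("cache list failed: %v", err)
	}
}
```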
Working in lockstep with members of the Singularity user, developer, and provider community, Sylabs is committed to ensuring the utmost in software quality in a completely transparent fashion. Although there is no substitute for the fully engaged efforts of this community, Sylabs has also taken steps to automate the quality assurance process through significantly expanded Continuous Integration (CI) unit and end-to-end testing.
The Singularity core was reimplemented in a combination of Go and C as of version 3.0.0 of the software. Although the 3.1.0 release demonstrates that this strategic investment is already paying off, the resulting CGO codebase presents a challenge to the broader community from a QA perspective. As practices for testing complex CGO projects like Singularity continue to evolve, Sylabs is well placed at the forefront of this effort, and is already capturing lessons learned and best practices in this developing field.
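To make the QA challenge concrete, here is a minimal, hypothetical example of the kind of unit a CGO test suite has to cover; it is not Singularity code, just Go calling into C through cgo, which standard Go tooling (the race detector, coverage instrumentation) handles less transparently than pure Go:

```go
// Illustrative cgo unit (not from the Singularity codebase): a Go wrapper
// around a small C helper. Tests for code like this must cross the Go/C
// boundary, which is what complicates coverage and race analysis.
package clamp

/*
static int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}
*/
import "C"

// Clamp bounds v to the range [lo, hi] by delegating to the C helper above.
func Clamp(v, lo, hi int) int {
	return int(C.clamp(C.int(v), C.int(lo), C.int(hi)))
}
```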
The first person to use the concept of a "singularity" in the technological context was John von Neumann.[5] Stanislaw Ulam reports a 1958 discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]
The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity is Near, predicting singularity by 2045.[7]
Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.
Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim is that artificial intelligence growth is likely to run into diminishing returns rather than accelerating ones, as has been observed in previously developed human technologies.
In one version of the intelligence explosion, computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles after, say, 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, and so on, where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[20]
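The four-year figure is simply the sum of a geometric series: a first doubling period of two years that halves with every subsequent doubling completes infinitely many doublings in finite time:

$$2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots = \sum_{k=0}^{\infty} 2\left(\tfrac{1}{2}\right)^{k} = \frac{2}{1-\tfrac{1}{2}} = 4 \text{ years.}$$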
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge, and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][21]
Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[22][23] A number of futures studies scenarios combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The book The Age of Em by Robin Hanson describes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence.[24]
Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[25][26][27] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4]
A speed superintelligence describes an AI that can function like a human mind, only much faster.[28] For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.[29] Such a difference in information processing speed could drive the singularity.[30]
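The 30-second figure is straightforward arithmetic: compressing one subjective year by a factor of one million gives

$$\frac{1 \text{ year}}{10^{6}} \approx \frac{3.15 \times 10^{7} \text{ s}}{10^{6}} \approx 31.5 \text{ s} \approx 30 \text{ s}.$$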
According to Chalmers, "Good (1965) predicts an ultraintelligent machine by 2000,[18] Vinge (1993) predicts greater-than-human intelligence between 2005 and 2030,[4] Yudkowsky (1996) predicts a singularity by 2021,[20] and Kurzweil (2005) predicts human-level artificial intelligence by 2030."[7] Moravec (1988) predicts human-level artificial intelligence in supercomputers by 2010 by extrapolating past trends on a chart,[31] while Moravec (1998/1999) predicts human-level artificial intelligence by 2040 and intelligence far beyond human by 2050.[32] In a 2017 interview, Kurzweil predicted human-level intelligence by 2029 and a billion-fold amplification of intelligence, and the singularity, by 2045.[33][34]
Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore,[12] whose law is often cited in support of the concept.[37]
Robin Hanson has expressed skepticism about human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.[38] Despite the many speculated ways of amplifying human intelligence, non-human artificial intelligence (specifically seed AI) remains the most popular option among the hypotheses that would advance the singularity.
The possibility of an intelligence explosion depends on three factors.[39] The first, accelerating factor is the new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards the singularity to continue. Finally, the laws of physics may eventually prevent further improvement.
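One illustrative way to make the middle condition precise (a reading of the argument, not a formula from the cited source): if each improvement enables, on average, r further improvements, the expected total number of improvements is

$$1 + r + r^{2} + r^{3} + \dots = \sum_{k=0}^{\infty} r^{k},$$

which keeps growing without bound only when r ≥ 1; for r < 1 it converges to 1/(1−r) and the process peters out.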
Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, then four and a half months, just over two months, and so on towards a speed singularity.[43][20] Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."[12]
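In external (wall-clock) time, this halving schedule again sums to a finite horizon, assuming no physical limit intervenes:

$$18 + 9 + 4.5 + 2.25 + \dots = 18 \sum_{k=0}^{\infty} \left(\tfrac{1}{2}\right)^{k} = 36 \text{ months},$$

so on this analogy the speed singularity would arrive about 36 external months after the first doubling begins.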