Tuesday, June 19, 2012
Saturday, June 16, 2012
Tech industry experts predict that demand for certain tech roles will decline dramatically over the next decade as organisations switch to cloud computing.
By 2020 the majority of organisations will rely on the cloud for more than half of their IT services, according to Gartner's 2011 CIO Agenda Survey.
After organisations have switched to the cloud, the number of staff needed to manage and pr...
Thursday, June 14, 2012
Sunday, June 10, 2012
From ACM TechNews:
Oh, That's Near Enough
The limits of Moore's Law, which observes that the number of transistors on a chip doubles roughly every two years, making chips faster and smaller, may finally be in sight. Transistors currently measure as little as 22 nanometers in width, and as they shrink, keeping them cool and error-free becomes more difficult. Now researchers around the world are designing smaller, faster, and more energy-efficient "sloppy chips" that can tolerate errors in their operation. An international team of researchers at Rice University, the Swiss Center for Electronics and Microtechnology (CSEM), and Nanyang Technological University found that by reducing the operating voltage, sloppy chips could deliver performance equivalent to that of ordinary chips while using only 25 percent of the energy. Another technique, known as pruning, involves wiring chips so that more power is delivered to the most important areas, while areas that compute non-essential data are given less power or removed altogether. Tests at CSEM found that pruned circuits were twice as fast, consumed half as much energy, and were half the size of conventional circuits. CSEM also is developing an error-prone chip for audio-visual processing in mobile phones that dispatches different processing tasks to the appropriate circuitry. Another approach to managing errors, called asymmetric reliability, uses error-prone circuits for number crunching to save power and run faster, says Stanford University's Subhasish Mitra.
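The trade-off behind pruning can be sketched in software: discarding the least significant bits of a computation (loosely analogous to removing the least important circuitry) yields a small, bounded error in exchange for simpler work. This toy Python sketch is only an illustration of the idea, not the researchers' actual hardware design; the function names and the choice of 4 dropped bits are assumptions for the example.

```python
# Toy illustration of "sloppy" arithmetic: drop the low-order bits
# of each operand before adding, trading a bounded error for a
# simpler (in hardware, smaller and cheaper) operation.

def prune(x, dropped_bits):
    # Zero out the least significant `dropped_bits` bits of x.
    mask = ~((1 << dropped_bits) - 1)
    return x & mask

def sloppy_add(a, b, dropped_bits=4):
    # Each pruned operand is off by at most 2**dropped_bits - 1,
    # so the sum's error is bounded by twice that.
    return prune(a, dropped_bits) + prune(b, dropped_bits)

exact = 1000 + 987
approx = sloppy_add(1000, 987)
error = abs(exact - approx)  # always <= 2 * (2**4 - 1) = 30
```

The point mirrors the article's: if the consumer of the result (audio, video, statistical number crunching) can absorb a small error, the bits spent on exactness are wasted.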
From ACM TechNews:
Human Memory, Computer Memory, and Memento
(06/05/12) Steven Cherry
University of Michigan professor John Laird describes the state, operator, and result (Soar) cognitive architecture as an artificial intelligence system that functions like a brain to solve problems. He says all matching rules fire in parallel in Soar, while selecting the next operator is the locus of decision making. "What we're trying to do in Soar is combine lots of rules at the same time, so when it's in a given situation, many rules will match, and instead of picking one, it will fire all of them, and instead of those doing actions, say, in the world, instead what they're doing, the first phase of that, is proposing separate actions," Laird notes. He says more memories have been added to Soar so that it can not only analyze rules to ascertain the next course of action, but also access these other memories that supply additional data about what to do next. Laird points out that a key element of Soar's problem-solving capability is a framework for episodic memory, a vital component of human-level cognition. He speculates that at some point the same type of task-specific or domain-specific knowledge included in IBM's Watson supercomputer will need to be incorporated into Soar.
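The decision cycle Laird describes can be sketched as two distinct phases: every rule whose condition matches the current state fires in parallel and proposes an operator, and a separate decision step then selects one operator to apply. The Python below is a toy illustration of that structure only, not Soar's actual architecture or API; the rules, state, and preference values are invented for the example.

```python
# Toy sketch of a Soar-style decision cycle (not Soar itself):
# phase 1 fires all matching rules at once to *propose* operators;
# phase 2 is the locus of decision making, selecting one operator.

state = {"hungry": True, "has_food": False}

# Hypothetical production rules: (condition, proposed operator).
rules = [
    (lambda s: s["hungry"] and not s["has_food"], "find-food"),
    (lambda s: s["hungry"] and s["has_food"], "eat"),
    (lambda s: not s["hungry"], "rest"),
]

def elaboration_phase(state):
    # All matching rules fire in parallel; nothing is chosen yet.
    return [op for cond, op in rules if cond(state)]

def decision_phase(proposals, preferences):
    # Pick one proposed operator, here by a simple numeric preference.
    return max(proposals, key=lambda op: preferences.get(op, 0))

proposals = elaboration_phase(state)
chosen = decision_phase(proposals, {"find-food": 1, "eat": 2, "rest": 0})
```

Separating proposal from selection is what lets many rules contribute knowledge to a situation without any single rule dictating the action, which is the behaviour Laird contrasts with classic "pick one rule and fire it" production systems.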