Pages tagged parallel:

MIT’s Introduction to Algorithms, Lectures 20 and 21: Parallel Algorithms - good coders code, great reuse
http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-thirteen/

This is the thirteenth post in an article series about MIT’s lecture course “Introduction to Algorithms.” In this post I will review lectures twenty and twenty-one on parallel algorithms. These lectures cover the basics of multithreaded programming and multithreaded algorithms.
Axum
http://msdn.microsoft.com/en-us/devlabs/dd795202.aspx
Axum is a language that builds upon the architecture of the Web and principles of isolation, actors, and message-passing to increase application safety, responsiveness, scalability, and developer productivity. Other advanced concepts we are exploring are data flow networks, asynchronous methods, and type annotations for taming side-effects.
Paul Dix Explains Nothing: Breath fire over HTTP in Ruby with Typhoeus
http://www.pauldix.net/2009/05/breath-fire-over-http-in-ruby-with-typhoeus.html
Might be a good alternative to Net/HTTP for Context Hero. How hard would it be to incorporate caching?
blog dds: 2009.03.04 - Parallelizing Jobs with xargs
http://www.spinellis.gr/blog/20090304/
With multi-core processors sitting idle most of the time and workloads always increasing, it's important to have easy ways to make the CPUs earn their money's worth. My colleague Georgios Gousios told me today how the Unix xargs command can help in this regard. The GNU xargs command that comes with Linux and the one distributed with FreeBSD support a -P option through which one can specify the number of jobs to run in parallel. Using this flag (perhaps in conjunction with -n to limit the number of arguments passed to the executing program) makes it easy to fire commands in parallel in a controlled fashion.
The xargs -P flag can also be useful for parallelizing commands that depend on a large number of high-latency systems. Only a week ago I spent hours writing a script that would resolve IP addresses into host names in parallel. (Yes, I know about the logresolve.pl that comes with the Apache web server distribution, but the speedup it provides leaves a lot to be desired.) Had I known the -P xargs option, I would have finished my task in minutes.
Cleanly saturating multicore systems with xargs.
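For reference, the -P/-n combination described above looks like this (a sketch; ips.txt, a file with one IP address per line, is hypothetical):

    # resolve addresses with at most 10 parallel jobs, one address per host(1) invocation
    xargs -P 10 -n 1 host < ips.txt

Both GNU and FreeBSD xargs accept -P; using -n 1 keeps each lookup in its own process, so one slow DNS response doesn't hold up the rest.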
MIT OpenCourseWare | Electrical Engineering and Computer Science | 6.189 Multicore Programming Primer, January (IAP) 2007 | Home
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-189January--IAP--2007/CourseHome/index.htm
This course uses the PlayStation 3 in its lectures.
bashreduce: A Bare-Bones MapReduce | Linux Magazine
http://www.linux-mag.com/cache/7407/1.html
Heh. Maybe useful for learning the MapReduce paradigm?
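The paradigm itself fits in a few lines of Python; here is a minimal word-count sketch of the map/shuffle/reduce shape (an illustration of the idea, not bashreduce's actual implementation):

    from itertools import groupby
    from operator import itemgetter

    def mapper(line):
        # map: emit a (word, 1) pair for every word in the line
        return [(word, 1) for word in line.split()]

    def reducer(word, counts):
        # reduce: combine all counts that share one key
        return (word, sum(counts))

    lines = ["the quick brown fox", "the lazy dog"]
    pairs = sorted(p for line in lines for p in mapper(line))   # shuffle: sort so equal keys are adjacent
    counts = [reducer(word, [n for _, n in group])
              for word, group in groupby(pairs, key=itemgetter(0))]
    print(counts)   # [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]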
Parallel Programming in Haskell: A Reading List « Control.Monad.Writer
http://donsbot.wordpress.com/2009/09/03/parallel-programming-in-haskell-a-reading-list/
Here’s my basic “How to learn about parallel programming in Haskell” reading list.
Haskell reading material.
Concurrency Hazards: Solving 11 Likely Problems In Your Multithreaded Code
http://msdn.microsoft.com/en-us/magazine/cc817398.aspx
MSDN Library 2008/11 - Joe Duffy
Server-side programs have long had to deal with a fundamentally concurrent programming model, and as multicore processors become more commonplace, client-side programs will have to as well. Along with the addition of concurrency comes the responsibility for ensuring safety. In other words, programs must continue to achieve the same level of robustness and reliability in the face of large amounts of logical concurrency and ever-changing degrees of physical hardware parallelism.
Google Technology RoundTable: Map Reduce
http://research.google.com/roundtable/MR.html
Matt is also the author of
anic - Project Hosting on Google Code
http://code.google.com/p/anic/
Faster than C, safer than Java, simpler than *sh
patterns:patterns [Parallel Computing Laboratory]
http://parlab.eecs.berkeley.edu/wiki/patterns/patterns
patterns for parallel programming
Fibers & Cooperative Scheduling in Ruby - igvita.com
http://www.igvita.com/2009/05/13/fibers-cooperative-scheduling-in-ruby/
Multiprocessing with Python
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/index.html?ca=dgr-lnxw97Python-Multi&S_TACT=105AGX59&S_CMP=grsitelnxw97
Learn to scale your UNIX® Python applications to multiple cores by using the multiprocessing module, which is built into Python 2.6. Multiprocessing mimics parts of the threading API in Python to give the developer a high level of control over flocks of processes, but also incorporates many additional features unique to processes.
In a previous article for IBM® developerWorks®, I demonstrated a simple and effective pattern for implementing threaded programming in Python. One downside of this approach, though, is that it won't always speed up your application, because the GIL (global interpreter lock) effectively limits threads to one core. If you need to use all of the cores on your machine, then typically you will need to fork processes to increase speed. Dealing with a flock of processes can be a challenge, because if communication between processes is needed, it can often get complicated to coordinate all of the calls. Fortunately, as of version 2.6, Python includes a module called "multiprocessing" to help you deal with processes. The API of the processing module has some similarities to the way the threading API works, but there are also a few differences to keep in mind. One of the main differences is that processes have subtle underlying behavior that a high-level API will never be able to completely abstract away.
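The Pool class is the easiest entry point to the module; a minimal sketch using only documented multiprocessing calls (the worker function and inputs here are made up for illustration):

    from multiprocessing import Pool

    def square(n):
        # CPU-bound work; each call may run on a different core
        return n * n

    if __name__ == '__main__':          # guard required so worker processes can re-import the module
        pool = Pool(processes=4)        # e.g. one worker per core
        results = pool.map(square, range(10))   # distributes work, blocks until done
        pool.close()
        pool.join()
        print(results)                  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Unlike threads, the workers are separate processes, so the GIL limitation described above does not apply.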
I, Cringely . The Pulpit . Data Debasement | PBS
http://www.pbs.org/cringely/pulpit/2008/pulpit_20081003_005424.html
The second time through, the Appistry team tossed the database, at least for its duties as a processing platform, instead keeping the transaction -- in fact ALL transactions -- in memory at the same time. This made the work flow into read-process-write (eventually). The database became more of an archive, and suddenly a dozen commodity PCs could do the work of one Z-Series mainframe, saving a lot of power and money along the way.