Intel Labs Scientist Kath Knobe joined Clay Breshears and Kathy Farrel on Parallel Programming Talk to discuss what's new with Intel® Concurrent Collections (CnC), a programming model and software framework developed by Intel for expressing parallelism in applications.
Here is the video. At the bottom of this blog you will find the topics discussed before our interview with Kath (including the Intel Guide for Developing Multithreaded Applications), the questions asked during the interview, and additional information on CnC and our guest:
Here is the link to download the show:
It’s Tuesday and this is Parallel Programming Talk #129– Our guest is Intel Research Scientist Kath Knobe. We’ll be speaking with her in a few minutes.
Good morning Clay – I think we should talk a bit more about the GDMA – the new version of which was released on October 25 – last week.
- For those who may not know: what is the Intel Guide for Developing Multithreaded Applications, who uses it, and what does it contain?
- When was it first published, and what was the response?
- How did you decide there needed to be some revisions and additions?
- What is revised, what is new and where is it?
- Anything new? (A reader asked for a downloadable PDF; that is now available for the guide as a whole and for each article as well.)
- The future of the guide?
And now for the News:
SC11 is only a few weeks away, which might be why there hasn't been much news these past few weeks; everyone seems to be saving the cool hardware and software announcements for the conference. Clay, what will you be doing in Seattle for this year's conference?
We’d like to hear from you – do you have a show idea, a listener question or do you have a prediction about who will be on the TOP500 list? Clay, what’s the best way for our viewers to let us know what they’re thinking?
Clay: They can send us email at firstname.lastname@example.org
Kath Knobe is here to talk about Concurrent Collections. Welcome to the show, Kath. Before we get into our topic and questions, can you tell us a little about yourself?
We’re talking about Concurrent Collections
- What is it?
- Advantages vs. disadvantages
- What was its origin and when? Were you trying to solve a problem? What was it?
- What kind of programmer uses CnC – and what for?
- To what does CnC lend itself?
- Easy to use? What kind of response have you gotten from users?
- What is in CnC’s future?
- How/where do you get it? / Where do our viewers learn more?
C++/CnC at Intel - /en-us/articles/intel-concurrent-collections-for-cc
The Concurrent Collections Programming Model. Burke, Knobe, Newton & Sarkar. Rice Technical Report TR 10-12. http://compsci.rice.edu/TR
Java/CnC at Rice University - http://habanero.rice.edu/cnc.html
Haskell/CnC at Intel and Indiana University - http://hackage.haskell.org/package/haskell-cnc
Thank you Kath for being on the show.
If you have comments, questions, suggestions for guests or show topics that you think would be of interest, we’d love to hear from you. Our email address is email@example.com
Don’t forget our email address: drop us a line at firstname.lastname@example.org
Remember – all’s well that’s parallel.
About Our Guest
Kathleen Knobe worked at Compass (aka Massachusetts Computer Associates) from 1980 to 1991, where she designed compilers for a wide range of parallel platforms, including those at Thinking Machines, MasPar, Alliant, Numerix, and several government projects. In 1991 she decided to finish her education. After graduating from MIT in 1997, she joined Digital Equipment’s Cambridge Research Lab (CRL). She stayed at CRL through the DEC/Compaq/HP mergers and CRL’s absorption into Intel. She currently works in Geoff Lowney’s group (Software Solutions Group / Developer Products Division / Technology Pathfinding and Innovation - SSG/DPD/TPI). Her professional interests have remained focused on parallelism, whether through compiler technology or language design.
Her major projects include Data Optimization (compiler transformations for locality), the Subspace Model of computation (a compiler internal form for parallelism), Array Static Single Assignment form (a method of achieving for array and loop-based code the advantages that SSA has for scalars), Weak Dynamic Single Assignment form (a global method for eliminating overwriting of data to maximize scheduling flexibility), Stampede and Ganga (streaming methodologies), and Concurrent Collections (CnC).