The Paraguin Project is being conducted in the Department of Computer Science at the University of North Carolina at Wilmington.  The purpose is to develop a message-passing parallelizing compiler for distributed-memory parallel computer systems.  We are using the SUIF Compiler from Stanford University to build our compiler.  In fact, the Paraguin Compiler is simply a SUIF compiler pass.


Faculty and Staff

Current Students

Former Students


With the introduction of multi- and many-core systems, parallel computing is now a mainstream computer science topic.  Traditionally, parallel computing was considered an advanced computer science topic.  Not only do we currently see processors with several cores (and will soon see processors with hundreds of cores), but single-core machines are no longer produced except for very specialized and embedded systems.  Furthermore, GPUs provide another source of massively parallel architectures.  Parallel computing is becoming a topic required by all computer science programs.  There has always been a lag between the development of hardware and the development of software to take advantage of that hardware.  Now that gap will grow significantly larger.

Parallel computing is a difficult endeavor.  Typically, parallel software is developed by the most gifted and talented developers.  This means there will be a serious problem developing software that exploits multi-core systems.  At present, fewer students graduate with computer science degrees than are needed by the software industry.  Add to that the fact that many graduating students are neither trained to program parallel systems nor have the talent to do so, and there will be a woeful shortage of software developers for today's and tomorrow's computers.

Two things are needed: 1) as educators, we need to train computer science students to develop correct parallel programs that can make use of today's multi-core systems; and 2) as scientists, we need to develop tools that make the development of parallel programs easier and more robust.

What is the Paraguin Compiler?

The Paraguin Compiler produces MPI code suitable for execution on distributed-memory systems.  MPI is still considered a low-level abstraction for expressing parallel algorithms.  The Paraguin Compiler raises the level of abstraction to one similar to OpenMP.  Most experts would agree that it is easier to develop a parallel algorithm using OpenMP than using MPI.  In teaching parallel programming, we have found that undergraduate students find programming with OpenMP much easier than with MPI.  This is partially due to the use of shared-memory systems as opposed to distributed-memory systems and partially due to the simpler interface of OpenMP.  A shared-memory system is simpler to program because no inter-processor communication is required, only synchronization.

The Paraguin Compiler provides an abstraction similar to OpenMP but for distributed-memory systems.  Using a similar interface, the user can have the compiler generate the MPI code.  However, we feel that such a level of abstraction is only a step in the right direction.  We will be introducing patterns (see [1] and [2]) in a future release of the software.  Using patterns, the developer chooses a pattern that matches their particular algorithm, provides the compiler with that pattern along with the details of the individual steps and the data that needs to flow according to the pattern, and the compiler produces a parallel program.  Some patterns that will be implemented are: Scatter/Gather, Workpool, Pipeline, Stencil, Divide-and-Conquer, and All-to-All.  We are in close collaboration with the "Seeds Framework", an environment for programming parallel systems using the above patterns.  Pattern programming using the Seeds Framework as well as the Paraguin Compiler was presented at a workshop at SIGCSE 2013 [3].

In the current version of Paraguin, the Scatter/Gather and the 2-D Stencil patterns have been implemented. 


We have decided the best option is for the user to upload a source file, have it compiled, and download the resulting source file.  Because the compilation is source-to-source, the backend can be handled by your local compiler.  This allows us to avoid building backends for every combination of processor, operating system, and architecture.

File Submission Page

The user manual is available here:


Frequently Asked Questions


  1. C. Ferner, B. Wilkinson, B. Heath, "Toward using higher-level abstractions to teach Parallel Computing", in the proceedings of the Third NSF/TCPP Workshop on Parallel and Distributed Computing Education (EduPar-13), held in conjunction with the 27th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2013), Boston, MA, May 20, 2013.
  2. B. Wilkinson, J. Villalobos, and C.S. Ferner, "Pattern programming approach for teaching parallel and distributed computing", in the proceedings of The 44th ACM Technical Symposium on Computer Science Education (SIGCSE 2013), Denver, CO, March 8, 2013.
  3. B. Wilkinson and C. Ferner, "Workshop 31: Developing a Hands-On Undergraduate Parallel Programming Course with Pattern Programming", workshop at The 44th ACM Technical Symposium on Computer Science Education (SIGCSE 2013), Denver, CO, March 9, 2013.

This page was last updated: May 12, 2014