A computer with multiple processors that can all run simultaneously on different parts of the same problem to reduce the solution time. The term is nowadays mostly reserved for those massively parallel computers with hundreds or thousands of processors that are used in science and engineering to tackle enormous computational problems.
There are two fundamental divisions in parallel computer architecture. The first is between those architectures in which each processor has its own memory space and communicates with others by message passing, and those architectures in which all the processors communicate through a shared memory (Shared-Memory Multiprocessors). High-end PCs and servers that contain more than one processor increasingly fall into this latter category.
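The two communication styles can be sketched in a few lines of Python. This is an illustrative model of the programming styles only, not of parallel hardware (standard Python threads do not run truly in parallel): the shared-memory workers coordinate through a lock-protected common data structure, while the message-passing workers keep no shared state and communicate only by sending results through a queue, which stands in for the message channel.

```python
import threading
import queue

# Shared-memory style: workers write partial sums into a common
# results list, coordinating through a lock.
results = [0] * 4
lock = threading.Lock()

def shared_worker(i, data):
    partial = sum(data)        # compute on this worker's slice
    with lock:                 # coordinate via shared memory
        results[i] = partial

# Message-passing style: workers share nothing and instead send
# their partial results as messages on a queue.
msgs = queue.Queue()

def mp_worker(i, data):
    msgs.put((i, sum(data)))   # communicate only by message

data = list(range(16))
slices = [data[i::4] for i in range(4)]

threads = [threading.Thread(target=shared_worker, args=(i, s))
           for i, s in enumerate(slices)]
threads += [threading.Thread(target=mp_worker, args=(i, s))
            for i, s in enumerate(slices)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_shared = sum(results)
total_mp = sum(v for _, v in (msgs.get() for _ in range(4)))
print(total_shared, total_mp)   # both compute the sum 0..15 = 120
```

In real machines the same contrast appears as MPI-style message passing on distributed-memory clusters versus OpenMP-style threading on shared-memory multiprocessors.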
The other fundamental division is between those computer architectures in which each processor executes the same program on a different data item (Single-Instruction Multiple-Data, or SIMD) and those in which each processor executes a different program (Multiple-Instruction Multiple-Data, or MIMD). Within these subdivisions, the processors can be connected together in many different ways (their Topology), which profoundly affects the efficiency of communication between them.
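The SIMD/MIMD distinction can also be sketched as a toy Python model (an illustration of the execution models, not of real vector or multicomputer hardware): in the SIMD case every worker applies the same operation to its own data item, while in the MIMD case each worker runs a different program.

```python
from multiprocessing.pool import ThreadPool

data = [1, 2, 3, 4]

# SIMD-style: every worker executes the SAME instruction stream
# (here, squaring) on a different data item.
with ThreadPool(4) as pool:
    simd_out = pool.map(lambda x: x * x, data)

# MIMD-style: each worker executes a DIFFERENT program on its item.
programs = [lambda x: x + 10, lambda x: x * x,
            lambda x: -x, lambda x: x // 2]
with ThreadPool(4) as pool:
    mimd_out = pool.starmap(lambda f, x: f(x), zip(programs, data))

print(simd_out)   # [1, 4, 9, 16]
print(mimd_out)   # [11, 4, -3, 2]
```

GPU warps and CPU vector units behave in the SIMD style, whereas a cluster of independent nodes each running its own program is the MIMD style.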