
Memory analysis of std::queue as a buffer for cv::Mat. Part 1, simple tests.

This article shows that both cv::Mat and Standard Library containers benefit from memory recycling. We found that when std::queue is used as a buffer for OpenCV Mats, the memory requirement depends on the size of the queue, not on the number of pushed elements. We conclude that containers like std::queue can be used effectively as buffers for OpenCV Mats.


A buffer is a container used to store elements that are waiting for further processing. A FIFO buffer is known as a queue, while a LIFO buffer is known as a stack.

A buffer is often required in an asynchronous producer/consumer scheme. The producer pushes elements into the buffer and the consumer pops them out. The consumer follows the producer: if the consumer lags behind, the buffer size increases; if the consumer is fast enough, the buffer size shrinks toward 0.

In video processing it is common to have one or more grabbing threads (producers) that read images from the camera, one or more processing threads (consumers) that perform the image processing, and a queue in the middle.

When the grabber gets a new frame, it pushes the frame into the buffer. The processor monitors the buffer; when a frame is available, the processor retrieves it and then performs the processing. This is simple to understand, but the reader should be aware of the memory requirements:

  • When a new frame comes from the camera it overwrites the previous one, so the grabber needs only 1 memory block. But to store the frame in the buffer we must create a copy of it, because the original will be overwritten ... this means we need a memory block for each grabbed frame, just because we are using a buffer.
  • The processor retrieves a frame from the buffer by making its own copy of it. This copy can overwrite the previous one, so the processor needs only 1 memory block. At this point the buffer can release the retrieved frame and free the related memory.

When using a buffer, grabbing N frames requires N+2 memory allocations/deallocations! This can result in huge memory requirements, high fragmentation, or a lot of garbage.

The reader should note that a well-designed grabber and processor run at comparable speed, so the buffer holds only a few frames to absorb latency or occasional delays. This means that only a few memory blocks are needed at the same time.

We want to understand whether those N allocations/deallocations benefit from memory recycling, so that the memory requirement is reduced to only a few memory blocks.

From MEM50-CPP. Do not access freed memory: "It is at the memory manager's discretion when to reallocate or recycle the freed memory. When memory is freed, all pointers into it become invalid, and its contents might either be returned to the operating system, making the freed space inaccessible, or remain intact and accessible."

We will start by investigating memory consumption (and recycling) in the specific case of a std::queue with OpenCV Mats as elements. The concepts are also valid for other containers such as std::stack, std::deque or std::vector.

Memory recycling with std::queue

Let's start by investigating whether multiple push/pop operations keep consuming memory or whether released memory is recycled.

int StdQueueMemRecyclingTest()
{
    std::queue<cv::Mat> myQueue;
    cv::Mat mat1, mat2;
    void *front1, *front2;
    myQueue.push(mat1);
    cout << "Push Mat1 \t addr: " << &mat1 << endl;
    front1 = &(myQueue.front());
    cout << "on the queue \t addr: " << front1 << endl;
    myQueue.pop();
    cout << "POP: Removes an element from the front of the queue" << endl;
    myQueue.push(mat2);
    cout << "Push Mat2 \t addr: " << &mat2 << endl;
    front2 = &(myQueue.front());
    cout << "on the queue \t addr: " << front2 << endl;
    if (front1 == front2)
        cout << "GOOD! std::queue is recycling memory." << endl;
    return 0;
}


Push Mat1 	 addr: 0x24FA80
on the queue 	 addr: 0x339C50
POP: Removes an element from the front of the queue
Push Mat2 	 addr: 0x24FB00
on the queue 	 addr: 0x339C50
GOOD! std::queue is recycling memory.

The example above shows that:

  1. std::queue::push creates a copy of the element at the end of the queue (front1 != &mat1: the pushed element has a different address from its source);
  2. the memory manager recycles memory (front1 == front2: a new element pushed after a pop receives the same address, so the new element overwrites the removed one).

Memory management by cv::Mat

cv::Mat has powerful automatic memory management (see the OpenCV documentation) that allocates/deallocates the memory automatically. This frees developers from manual memory management.

A cv::Mat consists of a header, which holds information about the Mat, and a pointer to a memory block that holds the Mat data matrix.

The assignment operator mat2 = mat1 (and the copy constructor cv::Mat mat2(mat1)) copies only the Mat header. This produces distinct Mat headers that share the same data memory. The operation is O(1) and fast because only the small header information is copied.

Pushing a cv::Mat onto a queue calls the copy constructor cv::Mat::Mat(const Mat& m):

No data is copied by the copy constructor. Instead, the header pointing to m data or its sub-array is constructed and associated with it. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() or Mat::copyTo

This has a direct (unwanted) implication when a cv::Mat is reused and pushed in a loop. Take a look at this example:

int MatAssignmentTest()
{
    std::queue<cv::Mat> queueOfMats;
    cv::Mat mat1(1, 3, CV_8UC1);
    for (int i = 0; i < 4; i++) {
        mat1 = i;                // modify the mat data
        queueOfMats.push(mat1);  // push the mat with new data
        cout << "Mat1:\t" << mat1 << endl;
    }
    while (!queueOfMats.empty()) {
        cout << "Queue:\t" << queueOfMats.front() << endl;
        queueOfMats.pop();
    }
    return 0;
}


Mat1:	[  0,   0,   0]
Mat1:	[  1,   1,   1]
Mat1:	[  2,   2,   2]
Mat1:	[  3,   3,   3]

Queue:	[  3,   3,   3]
Queue:	[  3,   3,   3]
Queue:	[  3,   3,   3]
Queue:	[  3,   3,   3]

Maybe you are surprised to see that all the Mats in the queue have the same (latest) values even though they were pushed with different data. This is because the queue contains N copies of the same Mat header, all pointing at the same data.

The result is formally correct but is bad for a queue! To take a full copy, the user should either wrap the Mat in a structure with a deep-copy constructor or use the Mat::copyTo or Mat::clone() methods. For example, this works fine:

queueOfMats.push(cv::Mat());       // create a new empty Mat
mat1.copyTo(queueOfMats.front());  // take a full copy
// or, equivalently, in a single call:
// queueOfMats.push(mat1.clone());

A simple container element for cv::Mat

Mat::copyTo works fine but, for Mats that must be buffered, a copy constructor makes things easier. As a solution we can encapsulate the cv::Mat class in our own simple structure with a deep-copy constructor:

struct myMat
{
    cv::Mat img;  /// Standard cv::Mat
    myMat(){};    /// Default constructor
    ~myMat(){};   /// Destructor (called by queue::pop)
    /// Copy constructor (called by queue::push): takes a full copy of the data
    myMat(const myMat& src) { src.img.copyTo(img); }
};

Now we can create a queue of our structure and use the standard push/pop operations as below:

int myMatAssignmentTest()
{
    std::queue<myMat> queueOfMyMats;  // a queue of myMat
    myMat myMat1;                     // an object of myMat
    myMat1.img = Mat(1, 3, CV_8UC1);
    for (int i = 0; i < 4; i++) {
        myMat1.img = i;
        queueOfMyMats.push(myMat1);   // this calls the myMat copy constructor
        cout << "Mat1:\t" << myMat1.img << endl;
    }
    while (!queueOfMyMats.empty()) {
        cout << "Queue:\t" << queueOfMyMats.front().img << endl;
        queueOfMyMats.pop();
    }
    return 0;
}

and... the result is as expected:

Mat1:	[  0,   0,   0]
Mat1:	[  1,   1,   1]
Mat1:	[  2,   2,   2]
Mat1:	[  3,   3,   3]

Queue:	[  0,   0,   0]
Queue:	[  1,   1,   1]
Queue:	[  2,   2,   2]
Queue:	[  3,   3,   3]

Going deep into memory analysis

Each push operation creates a copy of myMat. This calls the myMat copy constructor, which allocates new data for each pushed element. Supposing our images are 800x600 RGB:

H = size of a queue element: H = sizeof(myMat);    // H = 112 bytes

D = size of the image data:  D = 800 * 600 * 3 * 1;  // D = 1,440,000 bytes ≈ 1.37 MB

Each push requires H+D bytes of memory. While H is fixed and small (around 100 bytes), D can be huge for a large image. Pushing 100 of our images requires about 140 MB!

A queue works in the middle of a producer/consumer scheme. The producer pushes elements onto the queue, the consumer retrieves them using front/pop. In common cases a queue stays short but sees a lot of push/pop operations.

From the first paragraph we already know that std::queue recycles the H memory: a new element (push) overwrites the removed (pop) one. But the H memory holds only the Mat header, while the huge requirement comes from the D memory managed by cv::Mat.

We are going to show that the memory manager also recycles the memory allocated by cv::Mat for the image data. In short, N consecutive push/pop operations require only Smax * (H + D) bytes of memory, where Smax is the maximum size reached by the queue while producer and consumer operate. Because Smax << N, a queue of Mats is memory effective!

Honestly, it is uncommon for a queue to grow big. A very long queue shows that the producer is too fast or the consumer too slow; in that case a huge amount of memory is unavoidable.

The following test performs a push/pop/push sequence to demonstrate that only 1 Mat element and only 1 image data block are allocated even though we push twice:

int myMatQueueMemRecyclingTest()
{
    std::queue<myMat> queueOfMyMats;    // a queue of myMat
    myMat myMat1, myMat2;               // two objects of myMat
    void *front1, *front2;
    void *imgFront1, *imgFront2;
    myMat1.img = Mat(1, 3, CV_8UC1);
    myMat2.img = Mat(1, 3, CV_8UC1);
    myMat1.img = 1;
    myMat2.img = 2;
    queueOfMyMats.push(myMat1);
    cout << "Push myMat1 \t addr: " << &myMat1 << "\t"
        << "myMat1.img \t addr: " << (void*)myMat1.img.ptr() << "\t"
        << "data:" << myMat1.img << endl;
    front1 = &(queueOfMyMats.front());
    imgFront1 = queueOfMyMats.front().img.ptr();
    cout << "on the queue \t addr: " << front1 << "\t"
        << "queue.img \t addr: " << imgFront1 << "\t"
        << "data:" << queueOfMyMats.front().img << endl;
    queueOfMyMats.pop();
    cout << "POP: Removes an element from the front of the queue" << endl;
    queueOfMyMats.push(myMat2);
    cout << "Push myMat2 \t addr: " << &myMat2 << "\t"
        << "myMat2.img \t addr: " << (void*)myMat2.img.ptr() << "\t"
        << "data:" << myMat2.img << endl;
    front2 = &(queueOfMyMats.front());
    imgFront2 = queueOfMyMats.front().img.ptr();
    cout << "on the queue \t addr: " << front2 << "\t"
        << "queue.img \t addr: " << imgFront2 << "\t"
        << "data:" << queueOfMyMats.front().img << endl;
    if (front1 == front2)
        cout << "GOOD! std::queue is recycling memory." << endl;
    if (imgFront1 == imgFront2)
        cout << "GOOD! cv::Mat is recycling memory." << endl;
    return 0;
}


Push myMat1   addr: 0x2CF380	myMat1.img   addr: 0x1744C0, data:[ 1, 1, 1]
on the queue  addr: 0x179E40	queue.img    addr: 0x17A120, data:[ 1, 1, 1]
POP: Removes an element from the front of the queue
Push myMat2   addr: 0x2CF400	myMat2.img   addr: 0x179C60, data:[ 2, 2, 2]
on the queue  addr: 0x179E40	queue.img    addr: 0x17A120, data:[ 2, 2, 2]
GOOD! std::queue is recycling memory.
GOOD! cv::Mat is recycling memory.
  1. push creates a copy of myMat1 at the end of the queue. The copy of myMat1 on the queue is created at 0x179E40. The copy is performed by the copy constructor, which also creates a copy of the data; the new data is created at 0x17A120.
  2. pop removes the first element from the queue. This calls the removed element's destructor myMat::~myMat(), which in turn calls cv::Mat::~Mat(), and the memory is freed (but the memory manager does not return it to the operating system).
  3. push creates a copy of myMat2 at the end of the queue, again with a copy of the data.

The example shows that on the 2nd push the memory manager allocates the new element at 0x179E40, the same address as the removed element. In addition, the copy of the data is allocated at 0x17A120, again the same address as the removed data. The values are different but the memory is the same: that is memory recycling.

We can conclude that a std::queue of Mats is memory effective thanks to the memory recycling operated by the memory manager. The required memory depends on the size (length) of the queue, regardless of how many pushes we perform.

Part 2 of this article shows a real test case using 2 threads and a thread-safe queue of cv::Mat.

See also:

Memory analysis of pkQueueTS as a buffer for cv::Mat. Part 2, a real case.

10-12-2016

In Memory Analysis - Part 1 we concluded that a std::queue of OpenCV Mats uses memory efficiently. In this article we test our pkQueueTS queue with a grabber thread and a processor thread, and we analyze the memory usage to confirm the preliminary conclusions on recycling and efficiency.


The code, illustrations and examples on this page are for illustrative purposes only. The author takes no responsibility for their use by the end user.
This material is property of Pk Lab and can be used freely, provided that the source is cited.