I'm trying to improve some software written by a colleague who has since left our team, and I would welcome some tips. Let me explain:
What we have now (simplified explanation):
The system consists of a server and a client application. The server receives images from a camera, compresses each individual image (frame) into a JPEG using UIC (Intel IPP), and sends the resulting compressed buffer over the network (using TCP) to the client. The client receives the data, decompresses each frame (again using UIC), and displays it (as a texture in a Direct3D application I wrote). After a whole frame has been received, the client sends a confirmation to the server, and only then does the server start sending the next frame.
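To make the current per-frame handshake concrete, here is roughly how the framing of one JPEG buffer over TCP could look. This is a simplified sketch, not our actual code: the function names and the 4-byte length prefix are my assumptions about a reasonable wire format.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wire format: [4-byte payload size][JPEG payload bytes].
// The server would run buildFrameMessage() before send(); the client
// would run parseFrameMessage() on its accumulated receive buffer.

std::vector<uint8_t> buildFrameMessage(const std::vector<uint8_t>& jpeg) {
    std::vector<uint8_t> msg(4 + jpeg.size());
    uint32_t len = static_cast<uint32_t>(jpeg.size());
    std::memcpy(msg.data(), &len, 4);                 // native byte order (x86: little-endian)
    std::memcpy(msg.data() + 4, jpeg.data(), jpeg.size());
    return msg;
}

// Returns true and fills 'payload' if 'buf' holds at least one complete
// message; 'consumed' tells the caller how many bytes to discard.
bool parseFrameMessage(const std::vector<uint8_t>& buf,
                       std::vector<uint8_t>& payload, size_t& consumed) {
    if (buf.size() < 4) return false;
    uint32_t len;
    std::memcpy(&len, buf.data(), 4);
    if (buf.size() < 4 + static_cast<size_t>(len)) return false;
    payload.assign(buf.begin() + 4, buf.begin() + 4 + len);
    consumed = 4 + len;
    return true;
}
```

Since TCP is a byte stream with no message boundaries, some framing like this is needed no matter what codec produces the payload.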
What I would like to try:
I imagine that compressing video as a "stream" of individual JPEG pictures isn't the best approach in terms of data size (bitrate). I'm really not an expert, but wouldn't it be better to compress it as actual video, for example using something like H.264? The question is: is it possible to take live camera frames, compress them as video (using UMC), send that data manually over TCP or UDP (WinSock), and then somehow get the individual uncompressed frames back in the client application? If you think it is possible, could you please describe the basic implementation? (I don't want complete code, of course, just the idea.)
The point is that we are using wi-fi, and we need to transmit as little data as possible, in real time (or with as low latency as possible). We also have to deal with cases where the wi-fi signal is quite weak, which in the current implementation means incredibly low framerates, because TCP tries to transfer every frame completely at all costs :) I would prefer a decrease in video quality over a decrease in framerate.
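One idea I had for the latency side, regardless of codec: instead of queuing every camera frame for sending, the server could keep only the most recent one, so a slow link makes us drop stale frames rather than fall behind. A minimal single-slot "mailbox" sketch (my own idea, not anything from the existing code; thread synchronization is omitted for brevity, a real version would guard it with a mutex):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Single-slot frame buffer between the camera thread and the
// network thread: a new frame overwrites any unsent old one.
class LatestFrameSlot {
    std::optional<std::vector<uint8_t>> slot_;
    size_t dropped_ = 0;
public:
    // Camera thread: always succeeds; replaces an unsent frame.
    void put(std::vector<uint8_t> frame) {
        if (slot_) ++dropped_;            // previous frame was never sent
        slot_ = std::move(frame);
    }
    // Network thread: takes the newest frame, if any.
    std::optional<std::vector<uint8_t>> take() {
        auto out = std::move(slot_);
        slot_.reset();
        return out;
    }
    size_t dropped() const { return dropped_; }
};
```

With a real video codec this would need care (you can't drop arbitrary frames from an H.264 stream without decoder errors), but for the current per-JPEG protocol it seems like it would directly trade stale frames for latency.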
I know the problem is quite broad and maybe the explanation isn't the best (also, sorry for my English), but I will be grateful for any feedback from you experts ;)