The abstract of US Patent 8,127,233 (application number 11/860,008) reads:
"Frames of user interface graphical data can be remotely rendered at a client during a remote session with a server by providing graphical data commands to the client. The commands include motion commands derived from objects that change position between a current frame and a new frame and delta commands derived from differences between the frames. The delta commands can be generated from a frame update after applying motion commands or without applying motion commands. A server identifies moving objects having a first position in the current frame and a second position in the new frame, generates motion hints for the moving objects, and reduces the motion hints based on collision detection, motion verification and other factors. Motion commands are generated for the reduced set of motion hints and applied to a copy of the current frame at the server. Differences between the modified current frame and the new frame are then encoded as delta commands. The server then sends the motion commands and delta commands to the client. The client receives and applies the commands to the current frame to render the new frame."
This patent relates to our senior design project because it relies on pixel-differencing techniques similar to those we envision using to detect when activity has occurred in our security videos.
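In that spirit, here is a minimal frame-differencing sketch of the activity detection we envision, assuming OpenCV; the file name, THRESHOLD, and MIN_CHANGED values are placeholders we would tune against real footage.

    import cv2

    THRESHOLD = 25      # per-pixel intensity change counted as motion
    MIN_CHANGED = 500   # ignore frames with fewer changed pixels (noise)

    cap = cv2.VideoCapture("security.avi")  # placeholder input file
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    frame_idx = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Absolute per-pixel difference against the previous frame,
        # thresholded into a binary "changed" mask.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MIN_CHANGED:
            print(f"activity at frame {frame_idx}")
        prev_gray = gray
        frame_idx += 1
    cap.release()

Counting changed pixels against a minimum area is a simple way to separate real activity from sensor noise and small lighting flicker; more robust variants could use background subtraction or contour area instead.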