Will Monte dude, Baloo's hack worked. I think the problem is actually fairly simple once you look at it from outside the toaster-
My theory, based on everything I've read and been hacking away at for the last few days, is this:
Somehow, when cron runs to check the conversion queue and there are still items to be converted, PHP doesn't exit right away after passing the arguments to ffmpeg, because the PID is locked. Instead it holds a lock on the process running php7x.cgi and never kills the worker automatically when the work queue is completed. So cron spawns a new process every minute until the conversion queue has been entirely parsed, and if it loses track of how many PIDs are enqueued it will spawn as many new workers as it wants, growing memory usage very rapidly under multiple concurrent uploads. The locks don't auto-flush when a worker should be done. A minute-to-minute check when cron runs, while there is still conversion work to do, could keep track of this with some additional code or architectural changes.
If a conversion takes longer than one minute, then every time someone on the site uploads another video the same behavior repeats, eventually exhausting PHP's allotted memory and getting you the dreaded cron email telling you PHP has exhausted its memory. The reason the error quotes default values is that PHP was not able to find the extra local values, or even parse php.ini, at the time of the crash, so it quotes its fallback values of 94 megabytes and 64 megabytes. By that point PHP has already crashed under too many concurrent threads; again, it comes down to memory management in the subsystem architecture.
By adding ini_set() to the utils file with a value of 512M, it essentially sandboxes UNA into using only 512 MB for all the PHP worker threads that get assigned a PID and work within the context of a conversion process. It makes UNA wait to assign any additional work to PHP until it can check each file in sys_transcoder_queue individually, which safely prevents UNA from crashing PHP by accident in its eagerness to get things done without releasing the lock and flushing the PID workers in php.cgi... I wonder if the behaviour is any different under non-FastCGI, or if there is a setting in PHP-FPM that could alleviate this issue entirely...
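For reference, the same cap can be applied per invocation from the cron line itself, without editing the utils file. This is just a sketch with an illustrative path, not UNA's documented setup:

```shell
# crontab fragment (path is illustrative): cap each CLI worker's memory
# at 512M via -d, which is equivalent to ini_set('memory_limit', '512M')
# inside the script, but scoped to the cron-launched processes only.
* * * * * php -d memory_limit=512M /path/to/periodic/cron.php
```

The upside of doing it on the cron line is that the web-facing PHP pool keeps its own limit; only the transcoder workers get the 512M sandbox.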
Almost positive the crash happens because UNA tries to launch too many PHP instances.
Interestingly, even after the conversions are done, the PIDs are not released/flushed to be unlocked; right now I have six instances of php73.cgi running when I check top... Ultimately this is a memory management issue, and I suspect it's why UNA ends up being recommended for such high-memory cloud servers. I may be entirely wrong here, but the memory management of the scripts could be written to avoid all of this, and once a site scales up to be very large it's evident why this is a big issue.
Of course, I'm not a programmer, more of an admin type and an artist, but I see the larger patterns in things. If the process management and memory management for php73 were checked again, and each thread worker flushed after it finishes its job (perhaps query the register that holds the work queue in the MySQL db, and if sys_transcoder_queue has 0 rows, flush all the php7x.cgi workers, release the memory, and release the PIDs...), pardon my non-programmer grasp of the correct terminology, then workers would re-spawn on demand anyway as needed, driven by end-user input from the website... nothing would be lost if this is done correctly.
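The "check the queue, flush if empty" idea above could look something like the sketch below. Heavy hedging: the database name, credentials setup (assumed to live in ~/.my.cnf), and the php73.cgi process name are all assumptions on my part, and pkill is destructive, so treat this as an illustration of the idea rather than something to paste into production.

```shell
#!/bin/sh
# Hypothetical "flush when drained" check, run from cron after the transcoder.
# Assumes: credentials in ~/.my.cnf, database name una_db (illustrative),
# queue table sys_transcoder_queue, workers visible as php73.cgi.
PENDING=$(mysql -N -e 'SELECT COUNT(*) FROM sys_transcoder_queue;' una_db)
if [ "$PENDING" -eq 0 ]; then
    # Queue drained: release the lingering workers, their memory, and PIDs.
    # They re-spawn on demand when the web server receives new uploads.
    pkill -f php73.cgi
fi
```

The point is just the shape of the fix: one cheap COUNT(*) per cron run, and the workers only survive while there is actual work recorded in the queue table.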
cc: Anton L, Alex T⚜️