One trick for making the automated edit better is to shoot only very short bursts of video. Regardless of whether you leave the first edit alone and use it as-is, or decide to tweak it yourself, the process is still simpler and less time consuming than with virtually every other video editor out there. Still, we can't help but feel that implementing machine learning tools into the software would help kick it up a notch: something like Google's or Microsoft's systems that can recognise objects, understand what's happening in the footage, and then perform an action based on that.
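To give a rough idea of what that kind of object recognition could look like under the hood, here's a minimal Python sketch that scans a clip with an off-the-shelf pretrained detector and flags the moments a dog is on screen, which an editor could then favour when choosing cuts. This is not how QuikStories actually works; the model, the confidence threshold, the sampling rate and the clip name are all assumptions for illustration.

```python
# A rough sketch (not GoPro's pipeline) of "recognise an object, then act on it":
# scan a clip with a pretrained detector and note the timestamps where a dog
# appears, so those moments can be favoured in the automated edit.
# Assumes OpenCV and torchvision are installed; the clip path is hypothetical.
import cv2
import torch
import torchvision

DOG_CLASS_ID = 18           # "dog" in the COCO label set the model was trained on
CONFIDENCE_THRESHOLD = 0.7
SAMPLE_EVERY_N_FRAMES = 15  # don't run the detector on every single frame

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def dog_timestamps(video_path: str) -> list[float]:
    """Return timestamps (in seconds) of sampled frames that contain a dog."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
    hits, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % SAMPLE_EVERY_N_FRAMES == 0:
            # OpenCV gives BGR uint8; the model expects RGB floats in [0, 1]
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                detections = model([tensor])[0]
            for label, score in zip(detections["labels"], detections["scores"]):
                if label.item() == DOG_CLASS_ID and score.item() >= CONFIDENCE_THRESHOLD:
                    hits.append(frame_index / fps)
                    break
        frame_index += 1
    capture.release()
    return hits

# Example: flag the moments worth keeping in a hypothetical pool-jump clip
print(dog_timestamps("dog_pool_jump.mp4"))
```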
If there's any criticism, it's that the app doesn't always choose the best places to cut or pick the best footage. For instance, we captured multiple dogs running and jumping into a pool. In a few of those instances, the QuikStories edit would cut right as the dog was about to jump and then cut straight to the dog already in the water, missing the all-important leap through mid-air and the subsequent splash. Once or twice it did manage to show the glorious leap and the crash into the pool, and even turned it into a slow-motion scene.