My mother came across a handicam she has. Its resolution is not very high, but its optics are very good!
It allows manually setting the focus to ∞, so I had to point it at the Moon and Venus! Venus was more or less a blob, but I realized Venus is brighter than full daylight here, so it's waaay overlit. I manually turned the exposure down, and it looks like it actually sees the crescent!
@astro #planet #astronomy
@astro You can do quite a bit of astronomy with simple equipment.
🎞️ COOL SPACE PICS WITH A PHONE AND BINOCULARS | Astrobiscuit (22min)
I think it's probably better to get a lot of individual pictures than long exposures.. Software can probably combine the data better than a longer exposure can. But then maybe his phone or processing software didn't do that...
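The intuition above can be sketched numerically: averaging N aligned short frames cuts the random sensor noise by roughly 1/√N, without saturating bright targets the way one long exposure would. A toy demo (the "scene" and noise level here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" scene: a small bright blob, a stand-in for Venus.
true_scene = np.zeros((32, 32))
true_scene[14:18, 14:18] = 1.0

def noisy_frame():
    # Each frame is the scene plus independent per-pixel sensor noise.
    return true_scene + rng.normal(scale=0.5, size=true_scene.shape)

one = noisy_frame()
stack = np.mean([noisy_frame() for _ in range(100)], axis=0)

noise_single = np.std(one - true_scene)
noise_stacked = np.std(stack - true_scene)
print(noise_single, noise_stacked)  # stacked noise is roughly 10x lower
```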
@astro Using the power of programming, I cropped the video to Venus and turned the pixelated frames into a vague image, via a poorly conceived method of combining the video into a single image.
The first image is a raw frame; in the second, it figures out the exact shift between frames and adds them together, drawing onto a bigger "canvas".
The last image tracked the positions per color channel and then recombined everything.
That last one seems to keep the horizontal line artifact a bit, not sure why.
The part of the image to use is selected by a threshold, and the position is the average of pixel positions weighted by pixel values.
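That position estimate is an intensity-weighted centroid. A minimal sketch of the idea (the threshold fraction here is an assumption, not the value actually used):

```python
import numpy as np

def centroid(frame, thresh=0.2):
    """Threshold the frame, then average pixel coordinates weighted by
    pixel value, as described in the post."""
    mask = frame > thresh * frame.max()
    weights = np.where(mask, frame, 0.0)
    ys, xs = np.indices(frame.shape)
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total

# Toy frame: one bright pixel at (5, 7) should centroid to exactly there.
f = np.zeros((16, 16))
f[5, 7] = 1.0
print(centroid(f))  # -> (5.0, 7.0)
```

On a real blob the centroid lands at a sub-pixel position, which is what lets the stacked "canvas" be drawn at higher resolution than a single frame.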
This one determines positions for all colors together again, which seems better. It also averages the positions obtained from nearby frames. Seems a little better.
@astro It looks better if I add some pixels to the cropping. Below are Saturn and Venus, both grainy & processed, 876 and 1450 frames respectively. I don't consider this resolving the rings. #planet #astronomy
The average pixel value should be centered, but is instead down and to the right.. Something is wrong there.. Hope it's just that.
Thought: "what if I base the positions on the best location to overlay, using the previous method".. Last image in the list below.
It does look a little sharper to me? Took > 2060 seconds to compute 14s of footage, though..
Looks like it's not really improving the images..
Maybe try some more later, eventually.. If I wanted to do it faster / use the whole video, I might use OpenCV's Template Matching functionality..
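For reference, OpenCV's `cv2.matchTemplate` with the `TM_SQDIFF` method does essentially the following, just much faster; this plain-NumPy sketch shows the idea (the toy image and blob are made up):

```python
import numpy as np

def match_template_sqdiff(image, template):
    """Slide the template over the image and score each offset by the sum
    of squared differences; the best match minimizes the score. This is
    the TM_SQDIFF variant of template matching."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            diff = image[y:y + th, x:x + tw] - template
            scores[y, x] = (diff * diff).sum()
    return np.unravel_index(np.argmin(scores), scores.shape)

img = np.zeros((20, 20))
img[8:11, 12:15] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # blob at (8, 12)
tmpl = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
print(match_template_sqdiff(img, tmpl))  # -> (8, 12)
```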
Certainly can waste a lot of computing power on camcorder shots..
I am not sure if my approach gets the best positions to add the images at.. Eh, maybe try some selection later too; have a threshold on motion speed or match distance above which frames are not used..
Suppose probably the images won't get better, but still got things to try..
Working through the math of what I.. thought I was doing, it turns out my approach is incorrect.
There is a measured (lower resolution) image -> estimate step; the estimates are made by figuring out where on the estimate the measured image lies, and then averaging it over (a weighted average, if I had error bars).
However, deriving it that way would require assuming that two estimate-image pixels are the same whenever they land on the same measured-image pixel, which is obviously not true.
Blep, it means an n×m estimate-image needs a linear equation with an (n×m)² matrix.
That's 10^8 elements, at least in the naive approach. However, the matrix is sparse.
Finally got round to correcting for that.. Well, it's doing 1 pixel per second, and it has to deal with.. eh, 30×10×10 pixels per second.. And then at the end it has to solve a linear problem with a 13806×13806 matrix...
Also it ate all my memory and froze Linux. (Blame Linux for that last bit. Why can't it stop the program before it freezes everything, blep.) #ItsADisaster!
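Since the matrix is sparse, a sparse representation avoids both the 10^8-element storage and the dense solve. A sketch with `scipy.sparse` (the tridiagonal coupling here is a made-up stand-in for the real pixel-overlap structure; only the 13806 size is from the post):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Hypothetical stand-in: each unknown (estimate-image pixel) couples only
# to a few neighbours, so the matrix has O(N) nonzeros instead of N^2.
N = 13806
A = diags([np.full(N - 1, -1.0), np.full(N, 4.0), np.full(N - 1, -1.0)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(N)

# Sparse solve: feasible in memory where a dense 13806x13806 matrix
# (~1.5 GB as float64) might not be.
x = spsolve(A, b)
print(np.allclose(A @ x, b))  # -> True
```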
Finally put the cropped videos in GIFs.. Lol, should've done this first, to see properly what the input is.
Stupid OpenCV wouldn't produce videos.. But `.gif`'s non-lossiness is better anyway. (Used `PIL.Image` and `gifsicle`.)
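Writing a GIF from a stack of frames with PIL is a one-liner once the frames are `Image` objects; a minimal sketch (the filename, frame contents, and timing are made up; `gifsicle` would be a separate command-line pass to shrink the result):

```python
import numpy as np
from PIL import Image

# Toy frames: a diagonal line scrolling sideways, as uint8 grayscale.
frames = [
    Image.fromarray(np.roll(np.eye(32) * 255, i, axis=1).astype("uint8"))
    for i in range(8)
]

# save_all + append_images writes all frames into one animated GIF;
# duration is per-frame in ms, loop=0 means loop forever.
frames[0].save("crop.gif", save_all=True, append_images=frames[1:],
               duration=100, loop=0)
```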
Made a little blogpost. Not much info, mainly a table of different planets and occasions with the cropped video and the output of different attempts to make a good image.
Working on my camcorder planet shots again.
This time I "simulated" images from a hypothesis image and compared them to the real image. I added the difference to make a delta image (left).
To the right are the subsequent images, made by "compensating" with the deltas and then using the new image as the new hypothesis.
The squared-difference score gets better, but the image looks visually worse.. I think it may be caused by an incorrect model of the pixels.. In the raw videos, you see those horizontal lines..
A lower score means the model matches the actually-seen images better. And the method does reduce the value. But I haven't done the math to show it approaches the optimum nicely..
I remember seeing at one point a method for solving Lx=V ➡️ x that used approximation, but I can't find it. (Maybe I can derive one based on this "stomp out the difference" idea.)
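The "stomp out the difference" idea is essentially Richardson iteration: repeatedly compute the residual V − Lx and feed a scaled copy of it back into x. It converges when all eigenvalues of (I − αL) have magnitude below 1, e.g. for a symmetric positive-definite L with a small enough step α. A tiny sketch (the 2×2 system and step size are made up):

```python
import numpy as np

# Richardson iteration: x <- x + alpha * (V - L @ x).
# Each step "stomps out" part of the remaining difference.
L = np.array([[4.0, 1.0], [1.0, 3.0]])
V = np.array([1.0, 2.0])
alpha = 0.2  # must satisfy |1 - alpha * eig| < 1 for every eigenvalue of L

x = np.zeros(2)
for _ in range(200):
    x = x + alpha * (V - L @ x)

print(np.allclose(L @ x, V))  # -> True
```

Methods like conjugate gradients do the same residual-driven correction with smarter step directions and would likely be what that half-remembered solver was.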
@jasper ah nice, but that explains it 😅