My mother came across a handicam she has. Its resolution is not very high, but its optics are very good!
It allows manually setting focus to ∞, so I had to point it at the Moon and Venus! Venus was more or less a blob, but I realized Venus is brighter than full daylight here, so it's waaay overexposed. I manually turned the exposure down, and it looks like it actually sees the crescent!
@astro #planet #astronomy
@astro using the power of programming, I cropped the video to Venus, and turned the pixelated frames into a vague image with a poorly conceived way of combining the video into a single image.
The first image is a raw frame; the second figures out the exact shift per frame and adds the frames together, actually drawing on a bigger "canvas".
The last image took the positions per color and then recombined everything. That last one seems to keep the horizontal line artifact a bit, not sure why.
The part of the image to use is picked by a threshold, and the position is the average of the pixel positions weighted by pixel values.
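Roughly what that step looks like, as a sketch with numpy (the function names, threshold value, and canvas size are made up; it assumes grayscale frames that fit well inside the canvas):

```python
import numpy as np

def centroid(frame, thresh=30):
    """Intensity-weighted centroid of the above-threshold pixels."""
    w = np.where(frame >= thresh, frame.astype(float), 0.0)
    total = w.sum()
    ys, xs = np.indices(frame.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total

def stack(frames, canvas_shape=(256, 256), thresh=30):
    """Shift each frame so its centroid lands at the canvas center, then
    average; `count` tracks how many frames touched each canvas pixel.
    Assumes the shifted frame always fits inside the canvas."""
    canvas = np.zeros(canvas_shape)
    count = np.zeros(canvas_shape)
    cy, cx = canvas_shape[0] // 2, canvas_shape[1] // 2
    for f in frames:
        y, x = centroid(f, thresh)
        dy, dx = int(round(cy - y)), int(round(cx - x))
        h, w = f.shape
        canvas[dy:dy + h, dx:dx + w] += f
        count[dy:dy + h, dx:dx + w] += 1
    return canvas / np.maximum(count, 1)
```

The per-color variant would just run the same thing on each channel separately.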
This one does positions for all colors together again, which seems better. It also averages the positions obtained from nearby frames, which seems a little better still.
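Averaging positions over nearby frames could look like this (a sketch; `smooth_positions` is a hypothetical helper, and a plain moving average is just one way to do it):

```python
import numpy as np

def smooth_positions(positions, window=5):
    """Moving average of per-frame (y, x) centroids over neighbouring
    frames. `positions` is an (n_frames, 2) array; dividing by `norm`
    keeps the edges correct where the window hangs off the ends."""
    kernel = np.ones(window)
    norm = np.convolve(np.ones(len(positions)), kernel, mode="same")
    return np.stack(
        [np.convolve(positions[:, i], kernel, mode="same") / norm
         for i in range(2)],
        axis=1,
    )
```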
@astro It looks better if I add some pixels to the cropping. Below are Saturn and Venus, both grainy & processed, 876 and 1450 frames respectively. I don't consider this resolving the rings. #planet #astronomy
The average pixel value should be centered, but is instead down-right.. Something is wrong there.. Hope it's just that.
Thought "what if I made the positions based on the best location to overlay, on top of the previous method".. Last image in the list below.
It does look a little sharper to me? Took > 2060 seconds to compute 14s of footage, though..
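One way to read "best location to overlay" is a brute-force search for the integer shift that best matches the running stack, which would also explain the compute time (a sketch; the function name and search radius are made up):

```python
import numpy as np

def best_overlay_shift(stack_img, frame, search=5):
    """Try every integer shift in a (2*search+1)^2 window and keep the
    one with the smallest squared difference against `stack_img`.
    O(search^2 * pixels) per frame, hence slow. Assumes `stack_img` is
    at least (h + 2*search, w + 2*search)."""
    h, w = frame.shape
    best, best_score = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = search + dy, search + dx
            window = stack_img[y0:y0 + h, x0:x0 + w]
            score = ((window - frame) ** 2).sum()
            if score < best_score:
                best, best_score = (dy, dx), score
    return best
```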
Finally put the cropped videos in gifs.. Lol shouldah done this first, to properly see what the input is.
Stupid opencv wouldn't produce videos.. But `.gif`s' non-lossiness is better anyway. (used `PIL.Image` and `gifsicle`)
Made a little blog post. Not much info, mainly a table of different planets and occasions, with the cropped video and the output of the different attempts to make a good image.
Doing my camcorder planet shit again.
This time I "simulated" images from a hypothesis image and compared them to the real images. I added the difference to make a delta image (left).
To the right are the subsequent images, made by "compensating" with the deltas and then using the new image as the new hypothesis.
The squared-difference score gets better, but the image looks visually worse.. I think maybe it's caused by an incorrect model of the pixels.. In the raw videos, you see those horizontal lines..
A lower score means the model matches the actually-seen images better, and the method does reduce it. But I haven't done the math to show it approaches the optimum nicely..
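The loop above, under the simplest possible forward model, could be sketched like this (everything here is an assumption: the forward model is a pure circular shift per frame, whereas the real camera adds blur and those horizontal line artifacts, which is presumably where it goes wrong):

```python
import numpy as np

def refine(hypothesis, frames, shifts, step=0.5, iters=10):
    """'Stomp out the difference': simulate each observed frame by
    shifting the hypothesis, back-shift the residual (frame - sim),
    average the residuals, and add them to the hypothesis, scaled by
    `step`. np.roll is a toy stand-in for the real pixel model."""
    h = hypothesis.astype(float).copy()
    for _ in range(iters):
        delta = np.zeros_like(h)
        for f, (dy, dx) in zip(frames, shifts):
            sim = np.roll(h, (dy, dx), axis=(0, 1))            # simulate observation
            delta += np.roll(f - sim, (-dy, -dx), axis=(0, 1))  # back-shift residual
        h += step * delta / len(frames)
    return h
```

With this toy model the squared-difference score shrinks geometrically; with a more realistic (blurring) model that guarantee needs the convergence math the thread mentions.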
Remember seeing at one point an Lx=V ➡️ x solving method that used iterative approximation, but can't find it. (maybe I can derive one based on this "stomp out the difference" idea)