
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - TomHubin

Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 »
101
Video Probing / Re: Post your Point Clouds Here!
« on: April 19, 2008, 03:10:49 PM »
Hello Sid,

> I'm not sure what you mean by "USEFUL" FOV?

The laser plane Y dimension and XZ dimension that the camera sees. You can use a ruler or graph paper in place of the laser plane and see that with the camera under normal lighting.

>> What is the laser diameter in the working area?

> Uh, another Techie question that I'm not sure what you mean.

The collimated laser is often 1mm to 2mm thick as it leaves the laser module. If there is an adjustable laser focus lens then you can adjust the laser to be skinnier at the target. A skinnier laser is more concentrated and therefore more immune to background noise (e.g. shop lights).

Is your laser collimating lens adjustable? If so, have you adjusted the lens so that the laser is skinniest near the center of the scanning range? Can you estimate the thin dimension of your laser line near the center of your target range?

> The lights at my shop where I did the scanning were off.

What happens if the shop lights are on? Can you show something scanned with shop lights off and again with shop lights on, just to see if the shop lights are problematic?

>> Try scanning a sphere and see if the cloud appears circular as viewed from any direction.

> OK, I'll try it next week some time and let you know.

A ping pong ball is spherical with a nice surface. A tennis ball or baseball would show well but these have irregularities in the surface. Still, they should appear circular from any direction.

Tom Hubin
thubin@earthlink.net

102
Video Probing / Re: Post your Point Clouds Here!
« on: April 19, 2008, 03:17:25 AM »
Hello Sid,

> Thought I would start a new thread just for Clouds.

Great idea.

> Here are a couple of my latest results.  Both of these scans were done
> with both Y & X passes.  Stitching/striping courtesy of MachCloud!

This looks like a great data set. It also appears properly proportioned. Scanned hands shown in other threads have appeared to have extra-long fingers.

Are the stray points from the blanket?

Can you show a photo of your camera and laser setup?

What camera and laser are you using?

What is the useful camera field of view and how far is the target from the camera and from the laser?

What is the laser diameter in the working area?

Did you take this data with the house lights on?

Try scanning a sphere and see if the cloud appears circular viewed from any direction.

Tom Hubin
thubin@earthlink.net

103
Video Probing / Re: setting up camera and laser
« on: April 18, 2008, 02:10:45 PM »

Here are a couple of observations: when I mounted my camera as shown on the drawing, it was upside down. This resulted in the calibration line for my cube showing up lower rather than higher than the table (zero). Now that I am sitting here typing this, I am wondering if I could have used a negative value in the cube size setting box? But that's not what I did  ::)  Instead I used a setting in the camera driver to 'flip' the image so it appeared correctly on the screen.


Hello Sid,

I have mentioned the camera orientation options in another thread.

While you can rotate the camera to any angle from 0 to 360 degrees, it is more likely that the camera will be oriented in one of four positions: right-side up, upside down, rotated +90 degrees, or rotated -90 degrees. In addition, you might end up using a mirror someplace, and that would produce a mirror image of one of these four. So there are 8 likely setups.

All of these can be handled with only three operations: Mirror X, Mirror Y, and Rotate 90 degrees, in any order. 'Flipping' an image is Mach3's term for mirroring an image.
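The eight setups can be sketched with those three operations; here is a minimal pure-Python illustration (the helper names are mine, not Mach3's):

```python
def mirror_x(img):
    """Mirror left/right (what Mach3 calls flipping)."""
    return [row[::-1] for row in img]

def mirror_y(img):
    """Mirror top/bottom."""
    return img[::-1]

def rotate_90(img):
    """Rotate a quarter turn clockwise."""
    return [list(row) for row in zip(*img[::-1])]

# A small asymmetric test image: 2 rows x 3 columns.
img = [[1, 2, 3],
       [4, 5, 6]]

# Composing the three operations reaches all eight likely setups.
# For example, upside down is mirror X followed by mirror Y,
# which is the same as a 180-degree rotation.
upside_down = mirror_x(mirror_y(img))
```

Applying each of the eight compositions to an asymmetric image like this one produces eight distinct results, matching the eight camera/mirror setups.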

Tom Hubin
thubin@earthlink.net

104
Video Probing / Re: Improving Signal to Noise Ratio
« on: April 18, 2008, 01:25:27 AM »
Hello Mark,

> Just thumbing thru a Edmund Scientific catalog looking at filters to improve S/N.

You really should not need a filter. If the laser is focused to about 100 microns where it hits the target surface it will be a couple of orders of magnitude brighter than any other light source in the room hitting that same point.

You will find that an averaging AGC, which is what most cameras use, will increase the exposure time until the background does show. The handful of pixels that make up the image of the laser will be saturated. Background noise and signal saturation will both hurt accuracy.

If AGC is used it should be the peak detecting type. This will shorten exposure time until the laser signal is not saturating the pixels. The relatively low light level of the background will show up as black background.

If AGC is not used then the software should set the exposure time that produces peaks a little below saturation. IMO, peaks at about 70% of saturation are good.
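As a sketch of that exposure rule (the function and its parameters are illustrative, not any camera vendor's API):

```python
def adjust_exposure(exposure_ms, peak, full_scale=255, target=0.70):
    """Step the exposure time so the brightest laser pixel sits near
    70% of saturation; back off hard if pixels are clipping."""
    if peak >= full_scale:
        return exposure_ms * 0.5          # saturated: halve the exposure
    # Proportional correction toward peak = target * full_scale.
    return exposure_ms * (target * full_scale) / max(peak, 1)
```

Called once per frame with the measured peak pixel value, this converges on peaks near the 70% operating point.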

A normalized intensity would be the sum of the pixels divided by the exposure time. That would allow comparison of targets of various colors and textures. Even a flat photograph can be scanned and produce a greyscale output.
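That normalization is just a division; a one-line sketch (names are mine):

```python
def normalized_intensity(pixels, exposure_ms):
    """Sum of the laser-line pixel values divided by exposure time,
    so readings taken at different exposures can be compared."""
    return sum(pixels) / exposure_ms
```

A dark target scanned at a long exposure and a bright target at a short one then land on the same scale, which is what makes the greyscale output of a flat photograph meaningful.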

> You can get very narrow band interference filters (10nM FWHM) around a whole slew of central frequencies.

Even cheap laser diodes have a well defined wavelength within a bandwidth of a few nanometers. The ones sold by AixiZ on eBay are 635nm and 650nm. The ones that I am thinking of measure 12mm diameter and 30mm long and have adjustable focus and an optional line generator lens with available angles from 22 degrees to 120 degrees.

The 5mw, 635nm modules need a 3.2vdc, 20ma power supply. I have tried one of these and it seems to work ok.

The 10mw, 650nm modules can use 3-5vdc for power. I hope to use one of these so that I can use 5vdc from the USB line. I don't have one yet so I don't know the current but I would be surprised if it is more than 40ma.

Tom Hubin
thubin@earthlink.net


105
Video Probing / Re: Improving Signal to Noise Ratio
« on: April 18, 2008, 01:03:13 AM »

PS I just had a look at the specs on the laser pointers on Edmund and the red ones vary from 640nm to 670nm between 3 models... the green ones are both 532nm. A quick look at all their solid state lasers seems to indicate that the green 532nm ones are more consistent... not sure why.


Hello Mark,

The green ones are frequency-doubled Nd:YAG lasers. Nd:YAG lases at 1064nm. The laser beam is then passed through a frequency doubling crystal. That results in much of the 1064nm being converted to 532nm.

Some of the original 1064nm is probably also there but the human eye won't see it. An infrared test card might show that some IR is there. They could also filter out the residual 1064nm but I doubt that they would bother.

The red ones are made from a variety of materials. Each material has a preferred radiation wavelength with a small bandwidth of a few nanometers.

Tom Hubin
thubin@earthlink.net

106
Video Probing / Re: setting up camera and laser
« on: April 16, 2008, 09:23:36 PM »
Hello,

Here is a general drawing of my design. I wanted to keep it simple for discussion so I deleted a lot of the details that make it easy to orient and machine in a fixture.

Note that the included angle between the laser and the camera is 90 degrees. That allows the lens to be attached to the camera in the normal way. Then rotate the threaded lens to image the laser plane onto the CCD plane.

The camera will have a visible signal if the target crosses the laser plane within the camera's field of view.

A shorter focal length lens will have a greater angular field of view so will be less accurate. Also, inexpensive wide angle lenses are more likely to produce an image with barrel distortion. Barrel distortion can be corrected in software but that requires more complicated calibration and calculation methods.

Tom Hubin
thubin@earthlink.net

107
Hello,

Thanx for the file conversion suggestions.

I used the free Vellum viewer to convert from VLM to Windows BMP. Then used Windows Paint to convert the BMP to JPG and the BMP to GIF. Smallest file is GIF (13KB), then VLM (24KB), then JPG (62KB), and finally the very large BMP (565KB) with 256 colors. The default for BMP is 24 bit color and that produced a 1.5MB file.

I use black for the drawing and blue for the dimensions. JPG changed the blue to dark blue or purple. I am red/green colorblind so cannot be sure what the blue is actually changed to. My real objection is that the new color is so dark that it looks black to me unless I study it carefully. That defeats the purpose of using a different color for dimensions.

GIF is the smallest file and the best color match for my VLM file.

I am going to post the same drawing in GIF and JPG just to see how they come out.

VLM is a small line drawing file and there is a free viewer/converter. Is adding VLM to the list of attachable formats an option?

Tom Hubin
thubin@earthlink.net

108
Hello Art,

I would like to post some drawings and sketches. I use Vellum which has a VLM suffix. There is a free downloadable reader for VLM files but you don't list VLM files as attachable.

JPG is not one of my VLM conversion options.

I can convert VLM files to Windows BMP but this is also not listed as attachable.

Can you add VLM and BMP to your list of acceptable attachments?

Thanx,
Tom Hubin
thubin@earthlink.net

109
A ramped or stepped surface may not be necessary if you choose to scan a volume (x, y, and z) to acquire data for a cube.

I think that using a sphere for a calibration target would be perfect but spheres are hard to do a least squares fit for. A ping pong ball has a standard diameter and a great surface for scattering light.

Tom Hubin
thubin@earthlink.net


110
Video Probing / Re: setting up camera and laser
« on: April 13, 2008, 05:02:06 AM »
Hello Mark,

Taking a break from sorting income tax info.

> Still trying to pull up the specs of the other camera

Not important unless we talk specifics about this camera.

> it doesn't have all that alarm and image masking hardware (which may be handy in excluding all
> the laser line except the region of interest)

I would think that the meshing software will allow you to delete regions that are properly measured but not of interest...like the table that the object is sitting on. There should be no data outside the laser plane bounded by the camera's field of view. The laser is so much brighter than any other lighting that a short exposure time, like 1ms, will black out everything but the laser wherever it intersects the object.

> It has an adjustable shutter from off to 1/20000

If auto exposure is used it should be based on peak value and not the customary average over the entire view. Peak AGC would show the laser and black out everything else.

> The lens image is fine at this distance with no discernible distortion but I haven't looked
> that close. I have been playing with angles while imaging a steel ruler obliquely to get a
> sharp focus across the full frame.

> I didn't intend on having the laser plane along the Z but more in line with what you have
> been saying with the camera and lens setup in an adjustable mount to tilt the image plane
> correctly onto the CCD.

The payoff to allowing the CCD to be tilted is that you can locate the laser just about any way that you want to. Consider a simple laser pointer that is pointed straight down the Z axis and focused to a small diameter on the target 100mm below. Locate the lens off to the left of the laser about 75mm. Point the lens optical axis at the laser waist at the target. The object distance is 125mm since the laser and the lens and the target form a 3:4:5 right triangle.

Calculate the image distance and the image tilt.

clfl = camera lens focal length
od = object distance
id = image distance
latm = lateral magnification
otilt = object tilt (zero degrees is perpendicular to the optical axis)
itilt = image tilt (zero degrees is perpendicular to the optical axis)

Given:

clfl = 50mm

od = 125mm

Calculate:

id = 1/(1/clfl - 1/od)  = 1/(1/50mm - 1/125mm) = 83.333333mm

latm = -id/od = -83.333333mm/125mm = -0.667

otilt = atan(100mm/75mm) = 53.1301 degrees

itilt = atan(latm*tan(otilt)) = atan(-0.889) = -41.634 degrees
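The same numbers can be checked in a few lines (a sketch of the thin-lens arithmetic above, reusing the post's variable names):

```python
import math

clfl = 50.0    # camera lens focal length, mm
od = 125.0     # object distance, mm (75-100-125 is a 3:4:5 triangle)

id_ = 1.0 / (1.0 / clfl - 1.0 / od)             # image distance, mm
latm = -id_ / od                                # lateral magnification
otilt = math.degrees(math.atan2(100.0, 75.0))   # object tilt, degrees
itilt = math.degrees(math.atan(latm * math.tan(math.radians(otilt))))  # image tilt
```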

Since the laser is the center of symmetry the camera can be located on the left or the right or front or rear or several cameras all around.

> I'll check out that site for line generators as the good ones at Edmund don't come cheap.

Also checkout Thorlabs. Not cheap there either but many more choices for laser, housing, lens combinations.
 
> BTW I have access to a Beamscan XY spot size meter so when I do get a laser
> (in the 670nM range) I'll be able to tell you exactly how gaussian and wide my beam is :).

That is great. I don't suppose we are neighbors. I am in Laurel MD (between DC and Baltimore MD).

> The main disadvantage I see with using my current lenses is that range of Z measurements
> will be greatly reduced than with a wider angle lens.

You can expect to get accuracy about 1% of your fov. To get better than that you will need to design and build everything just right. Since you are scrounging and making compromises you should be proud if you get 1% of your fov as accuracy.

> If I have a 90 degree included angle between the laser plane and the optical axis
> (equally spread each side ie. laser plane intersects the table at 45 degrees) then
> that will only give me sqrt 2 (1.4) times my measured FOV of 15mm or around
> 20mm of Z depth measurable. 

Actually, I think you need to divide by the square root of two. That makes it closer to 10.6mm z combined with 10.6mm x.
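A sketch of that geometry, assuming the measured FOV lies along the 45-degree laser plane so it splits equally into Z and X components:

```python
import math

def z_depth(fov_mm, plane_angle_deg=45.0):
    """Depth (Z) component of a field of view of size fov_mm measured
    along a laser plane tilted plane_angle_deg from the table."""
    return fov_mm * math.sin(math.radians(plane_angle_deg))
```

At 45 degrees a 15mm FOV yields about 10.6mm of Z range combined with 10.6mm of X, i.e. the FOV divided by the square root of two rather than multiplied by it.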

> This would increase the more oblique the optic axis to the bed became but there
> are limits with this too as one needs the camera not to run into the object it's scanning
> and not be too blind to the shadow area... decisions decisions.

This is the way that I am doing mine. A 90 degree included angle means that you really cannot have the laser vertical because that would require the camera to be horizontal. I am setting the laser 30 degrees cw of the vertical and setting the camera 60 degrees ccw of the vertical. The advantage to this approach is that I can swap lenses to change my fov since the ccd array is not tilted with respect to the optical axis. And since I am interested in teaching it is easier for amateurs to copy and vary. The disadvantage is that x and z both depend on the h and v camera coordinates.

However, the professional method is to point the laser straight down the z axis. Then z depends on either the camera h or v coordinate (depending on camera orientation) while x and y are just taken from the cnc machine's DRO. The disadvantages are that it is more difficult to understand and design and build, the image will be distorted (but predictable) so must be corrected in software, changing the lens focal length changes the image tilt.     

> Depending on what I wish to scan this may be way too small. I would be happier with a max
> depth scannable of 150mm. One could do slices by indexing the scan head down but again I am
> not sure how either Art's s/w would cope with this or how I would join the separate meshes
> later (probably using Gmax or similar)

You get accuracy by having a small fov and scanning over as large a volume as necessary, as you suggest, to accumulate the point cloud shifted by machine coordinates so that it is relative to absolute space.

Also, accuracy depends on methodology. Two points can be uncertain by 0.010 inches and used to define a straight line. It would be best if those two points are as far apart as possible. Even better, many points taken along the length of the line reduce random error by a factor equal to the square root of the number of points. So 100 points along the line can reduce the error by a factor of 10.
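The square-root-of-N effect is easy to demonstrate with simulated noise (a sketch; the 0.010 figure is the example uncertainty above):

```python
import random

def mean_abs_error(n_points, sigma=0.010, trials=2000):
    """Average absolute error of the mean of n_points noisy samples;
    it shrinks roughly as sigma / sqrt(n_points)."""
    rng = random.Random(0)   # fixed seed for repeatability
    total = 0.0
    for _ in range(trials):
        samples = [rng.gauss(0.0, sigma) for _ in range(n_points)]
        total += abs(sum(samples) / n_points)
    return total / trials

# Averaging 100 points cuts the error by roughly a factor of 10
# compared with a single point, as the square-root rule predicts.
```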

> I really need to get the plugin working so that I can figure it out for myself but I've been
> concentrating on building my machine of late and I now have a windows\driver\mach3 conflict
> somewhere on my new laptop that crashes big time every time I launch a video plugin from
> Mach (blue screen of death). I'll probably fall back onto the old desktop to experiment.

I am about to do that myself. I now have three laptops that will run Mach3, or the camera vendor's software, or both side by side. I can access the camera with the Mach3 video window plugin but after that Mach3 slows to about 1% of its normal speed. I have to exit Mach3 and restart it to get it working right again.

> What kind of ballpark numbers for laser plane and optical axis (plus FOV ) are you shooting for?

Similar to yours. I started with a goal of one inch but scaled it down a bit so that I could make a one piece frame on my little Sherline 5410 CNC mill.

Tom Hubin
thubin@earthlink.net
