
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - TomHubin

Pages: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 »
1
Video Probing / Re: video probe, tool or toy?
« on: September 13, 2009, 02:14:00 PM »

I have not played with Tom's plugin as I have not been able to find a single cohesive thread on how to set it up yet. I have ample line lasers and a box full of webcams.

Can someone post a link to a step by step setup for 3d scanning on a 3 axis mill? (1) here is the software, (2) you need this hardware, (3) here is how you hook it up.


Hello Chris,

Sorry about not having a straightforward setup tutorial. Everything is in development and somewhat experimental.

I would suggest that you start with a search for "Hubin" or "thubin" at http://www.machsupport.com/forum/index.php#2 . That will bring up 32 posts from me. Most have something to do with 3d scanning. Browse through those and many from other people on the Video Probing forum.

Then ask questions and I will try to answer.

Tom Hubin
thubin@earthlink.net

2
Video Probing / Re: video probe, tool or toy?
« on: September 10, 2009, 03:35:30 AM »
Hello,

Timing depends on design resolution and motion speed, and a little on computer speed, but there are ways around that if necessary. Axes are arbitrary and can be set up any way that you like.

So let's say that you want to scan 600mm wide by 2000mm long by 300mm deep and you want 1mm resolution.

I would choose the optics for a 100mm x 100mm field of view since I expect about 1% of that for resolution. So a single frame would scan a line about 100mm long in the y direction and produce over 100 points, probably closer to 500. Then move the optics 1mm in the x direction and collect the next frame. Do this 2000 times to complete a 100mm x 2000mm path.

Then move y 100mm and go back to the start of x. Scan the next 100mm x 2000mm path. Do this six times to complete the 600mm wide by 2000mm long area.

Then move z 100mm and repeat all of the above. Do this 3 times to include the 300mm depth.

So we have stopped and grabbed a frame 2000 x 6 x 3 = 36000 times. How fast can you do that? Once per second is 10 hours. Ten frames per second is 1 hour. That would require that the motion ramp up for 0.5mm then ramp down for 0.5mm then grab and process a frame in 0.1 second.

This, of course, only gets the view from one side. If you want the view from the sides and the rear then you need to do this three more times.

Alternatively, you can have multiple scanners working in parallel. That is more complicated hardware but the live model is not so tired.

As for which system you use, I think the math for time vs resolution needs to be done for each system that you consider. For example, if you only need 2mm accuracy then half as many frames are needed in each axis. That can be 1/8 the time if the motion system can keep up.
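The frame-count and timing arithmetic above can be sketched as a quick estimator. This is just a back-of-the-envelope helper for the worked example (the function names, 100mm field of view, 1mm step, and frame rates are the example's numbers, not plugin settings):

```python
import math

def scan_frames(scan_x_mm, scan_y_mm, scan_z_mm, step_mm, fov_mm):
    """Frames needed for a raster scan: one frame per step along X,
    one pass per field-of-view width in Y, one layer per FOV depth in Z."""
    steps_x = math.ceil(scan_x_mm / step_mm)
    passes_y = math.ceil(scan_y_mm / fov_mm)
    layers_z = math.ceil(scan_z_mm / fov_mm)
    return steps_x * passes_y * layers_z

def scan_hours(frames, frames_per_second):
    """Total stop-and-grab time in hours at a given frame rate."""
    return frames / frames_per_second / 3600.0

# The 600mm x 2000mm x 300mm body-scan example at 1mm steps, 100mm FOV:
frames = scan_frames(2000, 600, 300, step_mm=1, fov_mm=100)
print(frames)                     # 2000 * 6 * 3 = 36000 frames
print(scan_hours(frames, 1))      # 10 hours at one frame per second
print(scan_hours(frames, 10))     # 1 hour at ten frames per second
```

Coarser resolution cuts the count in each axis, which is where the roughly 1/8 figure for 2mm accuracy comes from, provided the motion system can keep up.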

What resolution do you think you need?

How fast do you need to complete the scan from all sides?

Tom Hubin
thubin@earthlink.net


3
Video Probing / Re: video probe, tool or toy?
« on: September 08, 2009, 02:45:21 AM »

I have a couple of three axis mills I've built over the years and I've built a pretty good touch probe to go with them.

My life would get a lot less complicated if, instead of smearing a model with plaster and such for more than an hour or two at a time, I could sweep them with a laser and essentially machine a body-double from foam in a very short time. An hour modeling session could yield a dozen scans. An added plus is that no one has to touch the models or get messy stuff stuck to their boobies. (pretty clear advantages here, from the model's perspective anyway)

I see a lot of OK scans of knick-knacks and such, but seriously, it wouldn't be too hard to make a mold from a plastic statue.

Lawrence

Hello Lawrence,

I don't see any technical problems with scanning a human body.

The method that I use is to scan using something like a 3 axis mill with the spindle replaced by the scan device (camera and laser and optics). So, for scanning a human body, you need a three axis stage that is large enough. Let's say 600mm wide by 2000mm long by 300mm deep.

To make specific recommendations I would need to know the accuracy requirements and maybe the timing requirements since a live person must be relatively still during the scan.

Accuracy on the order of 1% of the camera field of view is reasonable to expect. So if you want 1mm accuracy then you are pretty much limited to a 100mm field of view. Since a human body is probably 6 times this width you would need to make six passes, moving the scanner over 100mm for each successive pass.

Each lengthwise pass could be 1mm steps for 2000mm length.

If you need finer detail then you could use a scanner intended for finer accuracy and make more passes to cover the area.

Keep in mind that the Z field of view is also limited. So multiple Z axis scans would be necessary if the model is well endowed.

To get all sides of the model you would have to roll the model over and try again. The multiple scans would have to be assembled with CAD software that can work with point cloud data.

If you are serious about making this convenient then consider a scanner mounted to travel around a standing model or a model reclining on something transparent. This would use x, y, z and theta for axes.

Say more about how you would want to do this and I will try to come up with more specific suggestions.

Tom Hubin
thubin@earthlink.net 

4
Video Probing / Re: video probe, tool or toy?
« on: August 23, 2009, 06:26:15 PM »
At this point in the development has anybody been able to actually scan an object and end up with a usable tool path?

Has anyone actually milled a duplicate of the item scanned with satisfactory results?

The purpose for the optical and the mechanical probes is to gather data on the target. The optical method is faster and collects a lot more data but in the end you just have a point cloud data set.

Because I have an optical background I have focused on the optical methods. First to make an inexpensive probe then to write some usable software while taking advantage of Mach's motion control.

I too would like to see where the data goes next. I am sure this part of the problem has already been solved for the folks that use touchprobes.

To me, though, it seems that the next step is to read the points into CAD software where a human operator can fit lines, arcs, and surfaces to sections of the data set. Then, as with any other finished drawing, create the Gcode.

Except for showing off at a trade show I doubt that you can just scan to create a point cloud then spit out Gcode to do each point. Point by point execution would be painfully slow.
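One reason the raw cloud is unwieldy is sheer point density. As a hedged illustration, here is a minimal voxel-grid thinning pass that a downstream tool might apply before any line or surface fitting. The helper name and the 1mm cell size are assumptions for the sketch, not part of the Scan3d plugin:

```python
def voxel_thin(points, cell_mm=1.0):
    """Keep only the first point seen in each cell of a cubic grid,
    reducing a dense scan cloud to roughly one point per cell."""
    kept, seen = [], set()
    for x, y, z in points:
        key = (int(x // cell_mm), int(y // cell_mm), int(z // cell_mm))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept

# Two points fall in the same 1mm cell, so only two points survive.
cloud = [(0.1, 0.2, 0.0), (0.4, 0.3, 0.1), (2.5, 0.1, 0.0)]
print(voxel_thin(cloud))
```

A human operator fitting arcs and surfaces in CAD is still the real next step; thinning just makes the data set manageable.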

Anyway, I think that probing the minds of the touchprobe users might yield some useful info.

BTW, there are examples in this Video Probing Forum of things made using data from an optical scan. One that comes to mind is a giant boot used as an attention getter in front of a camping or climbing store.

Tom Hubin
thubin@earthlink.net 

5
Video Probing / Re: scanning human body?
« on: August 22, 2009, 12:49:34 PM »
What is Scan 3D plugin?

Thanks

There are only 2 plugins that I know of for Mach3 to do video scanning for 3d profiling. The original one is written by Art and the more recent one by me. I call mine Scan3d for now and have posted it along with open source code.

The most recent activity is here:

http://www.machsupport.com/forum/index.php/topic,10959.0.html

Tom Hubin
thubin@earthlink.net

6
Video Probing / Re: scanning human body?
« on: August 21, 2009, 03:48:53 PM »

I'm looking to scan a living human body. From the scan I need to get a usable file I can machine into something useful. (think of machining a mannequin from foam)

I don't need resolution to see fingerprints and the like but would like to capture the basic curves of the body.

Is this do-able with the video probing software that is currently being developed as a Mach plugin?

Yes, if you have the right size equipment.

My Scan3d plugin is intended for a 3 axis stage like a CNC mill or router. I would use a large enough router that the person can recline on the machine table. The camera/laser equipment would be in place of the spindle. Range and  resolution would depend on your needs.

Scan3d does not handle rotating axes at this time but there is no reason it cannot do so in the future. 

Tom Hubin
thubin@earthlink.net

7
Video Probing / Re: New 3d Video Probe
« on: August 04, 2009, 07:40:04 AM »
Hello Eero,

> Attached the last calibration files and a snapshot of camera settings.

Thanx. They look good.

> The laser is at max brightness and I changed my pin point to a thinner one as well
> As you can see the exposure is about 0.23 ms.

Much nicer calibration data in your Scan3dCal.pts file.

>The scanning order is Y 1, X 2 and Z 3.

Thanx. I just needed to be sure that scanning order was not the problem.

> After calibration scan, some calculations are done.
> Where is this resulting information stored?

The coefficients are internal but will be saved in the ini file. Click the Save button, then look at the file with a text editor. You will see that they have changed.

> The properties window of scan3d.ini shows it not being changed since I
> stored the calibration settings BEFORE the calibration started. Is there
> some button that I'm missing to complete and store the data?

That data would normally change but does not work properly in your version. I have corrected that and am testing in case other things got messed up. I will run your data late today, then probably send you the latest plugin.

Meantime, try scanning a part to see if the calibration is working correctly now.

Tom Hubin
thubin@earthlink.net


8
Video Probing / Re: New 3d Video Probe
« on: August 02, 2009, 09:57:37 PM »
Hello Eero,

Thanx for the files. I will download and examine later.

The laser should be used at maximum brightness. That way it is much brighter than any ambient lighting. The camera exposure is set short enough so that the laser does not saturate the ccd array. This usually leaves everything else black, which is exactly what you want. You only reduce the laser if it still saturates the camera on the shortest exposure time.

With a 5mw laser spread over a line, that is not likely to happen with the Watchport camera. The shortest exposure time is something like 1/30 ms.

You may have to increase the exposure time to see things while you are setting up. But before you run the scan you need to reduce the camera sensitivity so that it only sees the laser.

If some ambient light is still appearing at a low level then set the plugin's threshold a little higher than the ambient light and it will be removed from the data set.
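A minimal sketch of that thresholding step, assuming 8-bit intensity samples along one camera row (the sample values and function name are hypothetical, not the plugin's code):

```python
def apply_threshold(row, threshold):
    """Zero out any pixel at or below the ambient threshold,
    so only the bright laser line survives in the data set."""
    return [v if v > threshold else 0 for v in row]

# Dim ambient light around 7-12 counts, laser line around 240-250.
row = [8, 12, 10, 240, 250, 244, 9, 7]
print(apply_threshold(row, threshold=20))  # laser pixels kept, rest zeroed
```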
 
Calibration is slow right now because it scans the entire volume looking for the point. Later I will make the calibration scan smarter so that it scans the volume but always near the point.

I set the camera for compression to speed up communications. Should work either way. All the other settings are factory defaults, I think. I will look to be sure.

Does your camera lens wiggle around? All of my lenses are loose fitting. I could not find a suitable commercial M12-0.5 nut to lock the lens down, so I made some Delrin nuts. A compression spring between the lens and the camera body might also keep the lens from shifting position if you don't touch the lens.

Tom Hubin
thubin@earthlink.net
 

9
Video Probing / Re: New 3d Video Probe
« on: August 01, 2009, 07:26:43 AM »
Hello Eero,

Just letting you know that I am working the problem.

I have corrected the FOV, elevation, and azimuth calculations. They are not causing your problems but certainly contributed to confusion. Those numbers are intended to be a reality check on your setup and calibration. They are not used anyplace. Just displayed.

It appears that calibration calculations are ok but the pts file data is not.

Look at the pts file with a text editor like MS Notepad or MS Word or a spreadsheet. The 9 columns of numbers are  x, y, z, h, v, gray, red, green, blue.

The green column has lots of values of 255. That is the max possible value, which means the actual value is anywhere between 255 and infinity. You need to reduce the camera exposure time so that you get few, if any, values of 255. You can view this on the fourth display. The green graph should not reach 255 (the dotted line).

When it does reach 255 it tends to have several neighboring pixels also at 255. This makes it hard to accurately decide which one is the peak value. This will affect accuracy but is not your major problem.
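One common workaround for a saturated plateau is to take the centroid of the run of peak-valued pixels rather than the first one found. This sketch illustrates the ambiguity, and is not the plugin's actual peak finder:

```python
def peak_pixel(row):
    """Return the peak position in an intensity row. When several
    neighboring pixels share the max value (a saturated plateau),
    use the centroid of those pixels as the best estimate."""
    peak = max(row)
    idxs = [i for i, v in enumerate(row) if v == peak]
    return sum(idxs) / len(idxs)

print(peak_pixel([10, 40, 255, 255, 255, 38, 12]))  # -> 3.0, plateau centroid
print(peak_pixel([10, 40, 200, 38, 12]))            # -> 2.0, clean single peak
```

With the exposure short enough that nothing saturates, the single-peak case is the normal one.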

The big problem is very strange. Often where the z value changes, the y value will be nonsense for a line or two. These odd data lines will be included in the calculations. I don't know yet where they are coming from but I will be working on it. I will need to set up my axes like yours and do them in the same order to see if it is a motion issue. May get to it Sunday night or early in the week.

Meantime, can you try cal again with z last? That would be y first and x second.

Please send me an email address if you want a test version of the plugin when I think it is ok enough for you to try.

Tom Hubin
thubin@earthlink.net

10
Video Probing / Re: New 3d Video Probe
« on: July 31, 2009, 02:39:04 AM »
Hello Eero,

> Finally I got together some pictures, calibration files and a few scans,
> that you wished to see.

Thanx. Lots of good detail. Can you post the full plugin box and not just the numbers? I would like to see what the camera is seeing and how it is being interpreted.
 
> The hardware:

> In addition to the original WatchportV3  4.9mm lens, I got a set of four
> 3.6 - 6 - 8 and 12mm on eBay
> http://cgi.ebay.com/Any-4-pcs-3-6-6-8-12-16mm-CCTV-board-Lens-set-dome_W0QQitemZ370158397420QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item562f28e7ec&_trksid=p3286.c0.m14

> In my setup I prefer the 8mm to the 12 mm lens because I need to have
> the FOV cover a Z variation of about 28 mm in one frame.

You will get less distortion from the lens if you use a longer focal length. This will provide a smaller angular field of view. Move the camera farther back to increase the linear field of view.

> The calibration setup:

> In the following photos you can see the needle fixed on a black velvet to get
> rid of all unwanted light or reflections that could disturb the calibration. The
> needle is about 40mm long and 0.8mm thick, painted matt black and touched
> on the point with white chalk, that lights up like a small LED when hit by the
> laser. In Scan3D intensities graph it gives a very clean and lonely peak. In the
> room there is no light and I even turn off the computer screen for the duration
> of the process. - And as "scanner" moves to the starting point of the calibration
> scan, I get out of the workshop myself. Not much fun to stay in the dark for
> hours... ;)

I use a Watchport/v2. Your v3 settings should be similar.

Most of that light reduction should not be necessary. You probably are using the default exposure control for the camera, likely set to "average", so the exposure time self-adjusts to the max of 0.25 seconds since you turned out all of the lights.

Set your exposure time (use Source/advanced controls) to off or manual. Then adjust the exposure time to 1ms or maybe 2ms. Most of the camera display, due only to ambient light, will go black or at least very dark. The needle point should still show up as a spike in the fourth display. You want the exposure time to be short enough that the spike does not quite reach the top of the display. That is when it is sensitive enough to get a good signal but not so sensitive that the signal saturates.

> As you guessed all the values I use are in mm.

Untested in mm but it should work ok. On that note, Art did MachCloud for mm but said that it should work for inches. It does not. So I guess I need to test in mm to see if there are surprises.

> The orientation for Watchport is 180 and for QuickCam 0 degrees.

I see that you properly rotated the display for each of these cases.

> And some questions:

> My second question in the last post wasn't very clear. Sorry.
> The camera-laser setup is fixed. This system has two planes: the laser plane
> and one perpendicular to it following the lens axis.

The photos look like the proper arrangement.

> If the laser line is parallel to X-axis this second plane is normally following
> the Y-Z plane. This is only very difficult or impossible to set precisely. Does
> it matter?  If I understand this needle point process right - any deviation
> of this would be corrected by the calibration, wouldn't it ?

That is the plan. However, you need to be sure that the setup does not change after you have calibrated the camera and laser. You can remove it and put it back without recalibration if you can put it back repeatably enough.

> Could you tell me what are the values of Elevation and Azimuth correspond to?

Elevation on mine is 60 degrees. That is the angle the laser plane makes with the XY table plane. This number is based on the calibration data collected and should be close to your design. That would be 45 degrees for your setup.

Azimuth is the angle that the laser line on the XY plane is rotated about the Z axis. That should be zero degrees for my design and calibrates to about 0.5 degrees. Yours should be close to 90 degrees.
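As a hedged sketch of these reality-check angles, here they are computed from pairs of hypothetical calibration points rather than from the plugin's internal coefficients:

```python
import math

def elevation_deg(p1, p2):
    """Angle the laser plane makes with the XY table plane, from two
    points up the laser plane at the same position along the line."""
    dy, dz = p2[1] - p1[1], p2[2] - p1[2]
    return math.degrees(math.atan2(abs(dz), abs(dy)))

def azimuth_deg(p1, p2):
    """Rotation of the laser line about the Z axis, from two points
    along the line projected onto the XY plane."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# A 60-degree elevation setup: moving 10mm in Y climbs about 17.32mm in Z.
print(elevation_deg((0, 0, 0), (0, 10, 17.32)))  # ~60 degrees
# A laser line running along X has zero azimuth.
print(azimuth_deg((0, 0, 0), (10, 0, 0)))        # 0 degrees
```

Numbers far from the design values, or stuck at defaults, are the clue that calibration did not actually run.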

Since your numbers are the defaults I would say that calibration has not been done. Are you using the latest version of Scan3d? I posted two versions.

> Attachments:
> For some reason, as long as I have the camera connected, I can't get
> any of my computers to get a complete "Print Screen" from your Scan3D
> window - I only get the leftmost "camera view" window. To show the
> intensity graph of my calibration "needle point" I had to take a photo of
> my screen.

Collecting camera data is done by using the clipboard. Would not be my choice. Just the only way I could get it to work using VFW (Video For Windows). Saving the screen also uses the clipboard so it interferes with camera data collection. That is primarily why I added a Pause button.

Pause the video, snapshot your screen and paste it into graphics software like MS Paint, then Resume the video. If you want to do this in the middle of a running job you would want to first pause the motion. That might be sufficient or you might still have to pause the video.

> I send you two sets of zip files - one for each camera. They include:
> scan3d.ini, Scan3dCal.pts and an example Scan3dCloud.xyz - plus
> three photos: lower part of the Scan3DWindow and two pictures of the
> example PointCloud showing the distortion mentioned in my previous post.

> Too many attachments... You'll find the zip files in the following post below.

Got them all. The only thing I see that is odd is that you scan the Z axis second and not last. Since your object is pretty flat (or am I incorrect) I would be inclined to scan Z last.

The FOV numbers of 1 and 1 are the default of 1 inch by 1 inch. In your case that is 1mm x 1mm. This tells me that you have no calibration data. Yet I can see that you have the cal data points in the cal file and values for the calibration coefficients in the ini file. So something is not right. I'll have to dig into that.

Tom Hubin
thubin@earthlink.net
