What is the tolerance tol? As you noted, tol is the tolerance for the stopping criteria: it tells scikit to stop searching for a minimum (or maximum) once some tolerance is achieved. Look at how the system variable Convergence Tolerance (TOL) affects the results of definite integrals (a MATLAB sketch of the same stopping-tolerance idea appears at the end of this section).

On format specifiers: %s represents a character vector (containing letters) and %f represents fixed-point notation (containing numbers). In your case you want to print letters, so if you use the %f format spec you won't get the desired result (see the sprintf example below).

Tol - comparison tolerance, specified as a positive real scalar. ismembertol scales the tol input using the maximum absolute values in the input arrays A and B, then uses the resulting scaled comparison tolerance to determine which elements in A are also a member of B (spelled out with numbers below). For unique rows within a tolerance, the call is

    uvals = uniquetol(vals, uniquetolval, 'ByRows', true);

and the rounding alternative, in the two variants posted, is

    uvals = round(vals/uniquetolval)*uniquetolval
    uvals = round(sv/uniquetolval)*uniquetolval

A summing approach that builds on Timbo's code is sketched below. Note that an extra unique row is produced by Timbo's approach and the summing approach. I believe all three are producing correct answers, but uniquetol works differently in that it tests each cell in a row to be within a tolerance, whereas Timbo's approach and the summing approach examine the entire row.

The problem is: can we speed up SPIReS? Unmixing every pixel takes too much time, but many pixels are similar to other pixels, so we shouldn't have to unmix them all. Rosenthal & Dozier (1996) realized this, and Walter built a regression tree based on a sample of pixels. I'd suggest we combine the ideas with superpixels. For a typical AVIRIS-NG image, I have various ways to create a 3-band image that superpixels can handle, and Timbo has identified other ways. David Thompson's Mars stuff just used a one-band image, the brightness (Euclidean norm) of the spectral reflectance. Depending on options, I get something like 65,000 superpixels, but running uniquetol on these at a tolerance of 0.05 for the weighted spectra reduces them to 2500-3000, which I can unmix on my laptop (a pipeline sketch follows below).

Then the question is: what to do with 'em? One approach is to say there are only 2500 answers and, for each pixel, pick the closest superpixel and assign that fSCA and albedo to it (see the lookup sketch below). An alternative is to assert that the superpixels span the range of variability and use a statistical fit ("machine learning") to assign the snow properties to each pixel. This topic applies to other retrievals besides snow. Machine learning, LUTs, GPU arrays, and other speedups are good research directions, but I have yet to see any evidence that they work better or faster than simpler methods for clustering. "Life is really simple, but we insist on making it complicated." – Confucius. Thoughts?

For now, I don't agree that unmixing takes too much time, since it can be efficiently parallelized. Right now, the bottleneck is uniquetol, which cannot be run in parallel, so that's what I've focused on. Down the road, with the right fast clustering algorithm, I think we'll be able to apply a look-up-table answer to each scene and skip the iterative solving, but we are not there yet. I'll note that I wasn't able to do that two years ago either, but I'm hoping someone smarter than I am will be able to (i.e., on this thread).
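Since the thread is MATLAB-centric, here is a minimal sketch of the stopping-tolerance idea in MATLAB rather than scikit. optimset's TolX and TolFun play the role of tol, and the quadratic objective is just an illustration, not anything from the discussion above.

    % Stop once the step size and the change in the objective fall below 1e-6.
    opts = optimset('TolX', 1e-6, 'TolFun', 1e-6);
    xmin = fminsearch(@(x) (x - 2).^2, 0, opts);   % converges near x = 2

Loosening the tolerance (say, 1e-2) stops the search earlier with a cruder answer, which is exactly the trade-off a tol parameter controls.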
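A two-line illustration of the %s versus %f point:

    sprintf('%s', 'abc')   % prints the letters: 'abc'
    sprintf('%f', 'abc')   % prints the character codes: '97.000000 98.000000 99.000000'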
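And the ismembertol scaling rule spelled out with toy numbers of my own:

    A = [1.0 2.001];  B = 2.0;  tol = 1e-3;
    % Effective tolerance = tol * max(abs([A(:); B(:)])) = 1e-3 * 2.001
    ismembertol(A, B, tol)   % returns [false true], since |2.001 - 2.0| <= 0.002001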
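Here is the promised comparison sketch. The uniquetol and rounding calls follow the snippets above; the summing variant is my reconstruction, since Timbo's original code isn't reproduced in this thread, so treat it as a hypothetical stand-in.

    % Toy data: the first two rows differ by less than the tolerance.
    vals = [0.10 0.20; 0.11 0.21; 0.50 0.90];
    uniquetolval = 0.05;

    % Approach 1: uniquetol tests each cell of a row against the tolerance.
    u1 = uniquetol(vals, uniquetolval, 'ByRows', true, 'DataScale', 1);

    % Approach 2 (rounding): snap values to a grid of width uniquetolval,
    % then take exact unique rows.
    u2 = unique(round(vals/uniquetolval)*uniquetolval, 'rows');

    % Approach 3 (summing, reconstructed): encode each rounded row as a
    % single number, then call unique on the codes.
    g = round(vals/uniquetolval);
    codes = g * (max(abs(g(:))) + 1).^(0:size(g,2)-1)';
    [~, ia] = unique(codes);
    u3 = vals(ia, :);

Because approaches 2 and 3 snap to fixed grid cells, two rows that straddle a cell boundary can land in different cells even though every element differs by less than the tolerance - one way the extra unique row mentioned above can appear.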
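For the superpixel route, a minimal sketch of the 65,000-superpixels-then-uniquetol reduction, assuming img is the 3-band image already built (how to build it from AVIRIS-NG data is left open above) and using speedinvert as a placeholder name for the unmixing routine:

    [L, N] = superpixels(img, 65000);        % label matrix, ~65k regions

    % Mean value per superpixel in each band.
    spectra = zeros(N, size(img,3));
    for b = 1:size(img,3)
        band = img(:,:,b);
        spectra(:,b) = accumarray(L(:), band(:), [N 1], @mean);
    end

    % Collapse near-duplicate superpixels: ~65k rows -> ~2500-3000 rows.
    [uspectra, ~, ic] = uniquetol(spectra, 0.05, 'ByRows', true, 'DataScale', 1);

    % Unmix only the unique spectra, then map the answers back.
    results = speedinvert(uspectra);         % placeholder for the real solver
    perSuperpixel = results(ic, :);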
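And a sketch of the "pick the closest superpixel" option: for each pixel, find the nearest unique spectrum and copy its fSCA and albedo. knnsearch (Statistics and Machine Learning Toolbox) is one way to do the lookup; variable names continue from the previous sketch, and the column layout of results is assumed.

    pixSpectra = reshape(img, [], size(img,3));   % one row per pixel

    % Nearest unique superpixel spectrum for every pixel (Euclidean distance).
    idx = knnsearch(uspectra, pixSpectra);

    % Copy that superpixel's retrieved properties (e.g., fSCA, albedo) over.
    pixResults = results(idx, :);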
The MATLAB built-in superpixels function can only handle 1 or 3 features per pixel for clustering; my method enables as many features as you want, but is slow. The built-in superpixels function is a MEX file and runs very fast. I was doing some superpixel stuff with both functions yesterday on Landsat images and the results are pretty dang similar. For speed, I recommend we use the MATLAB function. Also, there is a superpixels3 (3D superpixels) function - we could stack a few days together and find similar pixels across multiple days, probably just a few at most (see the superpixels3 sketch below).

The trick with using superpixels and MODIS data is that the spatial resolution of MOD09GA is quite coarse, so we don't want to make the superpixels too big. I'm guessing our fSCA estimates from MODIS superpixels are going to be similar to the aggregate of the fSCA estimates from the individual pixels. But unless all the component pixels have high fSCA, we may be reducing our ability to estimate grain size and contaminant concentration, with the superpixel's fSCA going below the thresholds to estimate those snow properties. We'd need a quick way to find good pixels/superpixels for estimating those properties.

Also, we have plenty of time. We could create a large training library of unique pixels from many tiles and days with the current "slow" combo of uniquetol + speedinvert, then train the ML; for future "fast" runs, run the statistical fit / ML approach on all pixels, which could be fast enough to just use ML on everything (a training sketch follows below). As for GPU arrays, once Aaron gets MATLAB 2021a fired up on tlun, speed might improve with the new updates to MATLAB handling the new NVIDIA cards.
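A sketch of the superpixels3 idea - stack a few days of the same tile into a volume so that regions similar across days share one label. The day arrays are assumed to be co-registered single-band images, and the region count is arbitrary:

    cube = cat(3, day1, day2, day3);          % just a few days at most
    [L3, N3] = superpixels3(cube, 5000);      % 3-D labels spanning all days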
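Finally, a sketch of the slow-training / fast-inference split described above. TreeBagger is just one plausible regression choice, speedinvert is still a placeholder name, and predicting fSCA from column 1 of the unmixed properties is an assumed layout:

    % One-time "slow" run: unique spectra and their unmixed properties.
    lib = uniquetol(allSpectra, 0.05, 'ByRows', true, 'DataScale', 1);
    props = speedinvert(lib);                        % placeholder

    % Train once on the library (here: fSCA in column 1).
    mdl = TreeBagger(100, lib, props(:,1), 'Method', 'regression');

    % Future "fast" runs: predict directly for every pixel in a new scene.
    fscaHat = predict(mdl, newSpectra);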