Wednesday, June 25, 2008

Solutions to DIFFERENTIAL GEOMETRY: A First Course in Curves and Surfaces

 

DIFFERENTIAL GEOMETRY: A First Course in Curves and Surfaces.

Theodore Shifrin.

It's a very concise and clear book. Very good for people who just want a basic understanding of curves and surfaces. It skips a lot of blah-blah but covers every basic and important idea.

I only do the questions that are interesting to me; please don't copy them for your homework.
If you find better solutions, please let me know. Thanks.

 

Solutions to Exercises 1.2, Questions 13 to 18.






Solutions to Exercises 2.1, Questions 4, 5, 8, 10

  Q4   Q5 Q8 Q10

Monday, May 19, 2008

Worth Reading

PAMI

Volume 30, Issue 5

A Factorization-Based Approach for Articulated Nonrigid Shape, Motion, and Kinematic Chain Recovery from Video

1. Structure from Motion: reconstruct shape information from motion trajectories.

2. Previous methods assume that the kinematic model of the articulated object is known.

3. Considering only affine transforms, two points move independently whenever their stacked motion-trajectory matrix has full rank. If they lie on an axis (bone), the motion matrix loses two ranks; if they are linked by a joint, it loses one. This is the basic principle of the paper. The paper tries to find the relevant subspaces of the motion matrices to identify joints and axes, so it can reconstruct the kinematic (skeleton) model.

4. Another application of factorization.

Nonrigid Structure-from Motion: Estimating Shape and Motion with Hierarchical Priors

1. Factorization-Based methods for Shape from Motion are very sensitive to noise.

2. The paper uses probabilistic methods to deal with the noise.

3. Instead of placing a single prior distribution on the latent coordinates, the paper uses two levels of prior information: first, that the latent coordinates are Gaussian-like (so the deformations of objects should be similar); second, a prior on the motion.

4. On synthetic noiseless data, closed-form solutions work well and the probabilistic solution might get stuck in local minima, but on noisy real-world data the probabilistic method produces the best results.

5. I think the main point of this paper is that it proposes a probabilistic framework and optimization algorithm for Shape from Motion problem.

Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures

1. Introduction. Dynamic textures can be described as a stochastic process governed by a linear dynamical system (LDS), the kind of model used in linear Kalman filtering and ARMA modeling. The paper extends this model by mixing several of them, using a latent variable to describe the class a given dynamic texture is sampled from.

2. An EM algorithm is proposed to train the parameters.

3. Relation to Adaptive Filtering and Control, especially Kalman filtering and LDS is discussed.

4. The algorithm can be used in Video Clustering and Motion Segmentation.

5. For time-series clustering, it is tested against three multivariate-series clustering algorithms (PCA subspace similarity, KL divergence, and the Chernoff measure) using synthetic data.

6. It is also tested on real video such as fountain scenes, highway traffic, and pedestrian scenes.

7. This work studies a principled probabilistic extension of the dynamic texture model. A classic dynamic texture represents a single sample from a linear dynamical system; a mixture of them can represent a group.

8. It has a system identifiability problem which I don't understand yet.
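The LDS behind a single dynamic texture (note 1) is the standard state-space model x_{t+1} = A x_t + v_t, y_t = C x_t + w_t. A minimal Python simulation sketch; the dimensions, noise levels, and stabilizing rescale below are my own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, T = 4, 16, 200   # hidden dim, "pixel" dim, frames (arbitrary)

# A random transition matrix, rescaled so its spectral radius is 0.9 (stable)
A = rng.standard_normal((n_state, n_state))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()
C = rng.standard_normal((n_obs, n_state))   # observation (appearance) matrix

x = np.zeros(n_state)
frames = np.empty((T, n_obs))
for t in range(T):
    x = A @ x + 0.1 * rng.standard_normal(n_state)          # state noise v_t
    frames[t] = C @ x + 0.01 * rng.standard_normal(n_obs)   # observation noise w_t
```

A mixture of dynamic textures would add a latent class variable selecting among several (A, C) pairs, which is what the paper's EM algorithm estimates.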

Video Behavior Profiling for Anomaly Detection

1. Introduction. The paper addresses the problem of identifying anomalies in CCTV video without necessarily classifying them, and it shows that using unlabeled data is superior to using manually labeled data for the job. Manually labeling a surveillance video into different classes can be tedious and subjective.

2. The framework used in the paper includes: 1) A scene event-based behavior representation, which differs from approaches based on object tracking. This avoids the difficulty of tracking targets under occlusion in noisy scenes. Each behavior pattern is modeled using a Dynamic Bayesian Network (DBN), which provides a suitable means for time warping and for measuring the affinity between behavior patterns. 2) Behavior profiling based on discovering the natural grouping of behavior patterns using the relevant eigenvectors of a normalized behavior affinity matrix (spectral clustering). 3) A composite generative behavior model using a mixture of DBNs. 4) Online anomaly detection using a runtime accumulative anomaly measure, and normal behavior recognition using an online Likelihood Ratio Test (LRT) method.

3. Video is first separated into small segments using the static frames between them, and an event-based behavior representation is extracted as in reference 8 (S. Gong and T. Xiang, "Recognition of Group Activities Using Dynamic Probabilistic Networks").

4. Discussions and Conclusions: 1) The experiments show that a behavior model trained using an unlabeled data set is superior to a model trained using the same data set with labels in detecting anomalies in unseen video. The former model also outperforms the latter in distinguishing abnormal behavior patterns from normal ones contaminated by errors in behavior representation. A model trained using manually labeled data may have an advantage in explaining well-defined data, but that does not necessarily help it identify novel instances of abnormal behavior patterns, as such a model tends to be brittle and less robust in dealing with instances of behavior that are not clear-cut in an open-world scenario. 2) The eigenvector-selection-based spectral clustering algorithm is able to discover the natural grouping of behavior patterns. 3) The online LRT-based normal behavior recognition method is superior to the conventional ML-based method.

5. Problems. Lack of semantic information. There is still a long way to go toward a general-purpose anomaly detection method that can be applied to any type of scenario.

6. Questions. How to use domain knowledge? How to speed up the learning process with supervision from humans? How can knowledge about one domain be generalized to others?

Riemannian Manifold Learning

1. The paper focuses on Manifold Learning. Existing algorithms are ISOMAP, LLE, Laplacian eigenmaps, Hessian eigenmaps, semidefinite embedding, manifold charting, local tangent space alignment, diffusion maps, and conformal eigenmaps.

2. The problems that manifold learning algorithms have are:

a) The metric-preserving problem. ISOMAP handles it by unfolding the manifold onto a flat plane while trying to preserve the metric, but this is not possible for surfaces like a sphere, whose Gaussian curvature is isometry-invariant and nonzero.

b) The cost-averaging problem. When unfolding the manifold, many existing algorithms use a uniform cost for every point on the manifold, which can distort the shape of the unfolded result. The paper proposes a local optimization procedure instead of a global one to avoid this problem.

c) Incremental learning. Previous manifold learning algorithms all use batch learning, i.e., the training samples are consumed at once; adding a new sample requires redoing the whole learning process.

d) Global optimization. Global optimization is expensive.

3. Assumptions. Currently, the paper makes quite restrictive assumptions and only demonstrates on 3D face data so far. For simplicity, only varying poses and lighting conditions are considered in the model, as they are the most important factors in face recognition. The distance from the camera to the face and the focal length of the camera are required to be fixed so the acquired images have similar scales. The camera is allowed to rotate around its optical axis.

Monday, May 12, 2008

Worth Reading

PAMI

Volume 30, Issue 1

Groups of Adjacent Contour Segments for Object Detection

1. Introduction. Many existing recognition algorithms use local patches (color cues) instead of contour features. This paper proposes a scale-invariant local shape feature formed by chains of k connected, roughly straight contour segments (k-AS).

2. The complexity of the shapes a k-AS can represent increases with k: a 2-AS can be an L shape, while a 3-AS can form C, F, and Z shapes.

3. Following a bag-of-features paradigm, they construct a codebook of k-AS types.

4. Related work. Problems with existing solutions: clutter and a restricted class of shapes. A natural idea is to use boosting to select edge patterns for classification (but that is not scale invariant and is tailored to a specific class and dataset).

5. Split the detector window into separate tiles and compute a histogram in each tile to form a descriptor.

6. Integral histograms are used to speed up the sliding-window detector based on kAS histograms.

7. Conclusion. Future work: multiple views, rotation invariance, a more complex framework.

Locally Rotation, Contrast and Scale Invariant Descriptors for Texture Analysis

Yet another feature descriptor.

Mixture of Spherical Distributions for Single-View Relighting

1. Assumptions. Given a 3D model of the object, the method can recover the direction and intensity of multiple light sources, the number of light sources, and the specular reflectance property of the object. Previous methods assume 1) all light sources are infinitely distant (directional), 2) the geometry of the target object is known, and 3) the number of light sources is known.

2. To estimate the parameters of the light sources, the paper uses the EM algorithm.

LEGClust - A Clustering Algorithm Based on Layered Entropic Subgraphs

1. This clustering algorithm uses the local structure information of the data, so it can be used to cluster Swiss-roll-like data.

Mutual Information for Lucas-Kanade Tracking (MILK): An Inverse Compositional Formulation.

The classical LKT algorithm uses SSD as the distance measure between two images; this paper proposes mutual information as an alternative measure. It also proposes an efficient algorithm based on the inverse compositional formulation. Source code is available.

Sunday, May 11, 2008

Notice the Invariance Property of Maximum Likelihood Estimation!

Mood, Alexander McFarlane. Introduction to the Theory of Statistics. McGraw-Hill, 1974, p. 286.
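A quick numerical illustration of the invariance property (my own sketch, not from the book): if theta-hat is the MLE of theta, then g(theta-hat) is the MLE of g(theta). For a normal sample, the MLE of sigma^2 is the second moment about the sample mean, so the MLE of sigma should be its square root, and a direct grid search over the likelihood in sigma agrees:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=10_000)

# MLE of (mu, sigma^2) for a normal sample
mu_hat = x.mean()
var_hat = ((x - mu_hat) ** 2).mean()        # note the 1/n, not 1/(n-1)

# Invariance says the MLE of sigma is sqrt(var_hat). Check by maximizing
# the profile log-likelihood over a fine grid of sigma values:
sigmas = np.linspace(2.5, 3.5, 20001)
loglik = -x.size * np.log(sigmas) - ((x - mu_hat) ** 2).sum() / (2 * sigmas**2)
sigma_mle = sigmas[np.argmax(loglik)]

assert abs(sigma_mle - np.sqrt(var_hat)) < 1e-3
```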

Friday, May 09, 2008

All comps passed, GPA 4.0, not bad.

Hope I can pop out some publications before the end of this year.

Another Brain Workout. Matrix Rank Problem

Thanks to Prof. Israel who helped me out twice.

**********************************************Wei
Suppose X is a random data matrix with dimension n by t,

M = inv(X' * X), i.e. M is the inverse of the product of (the transpose of X
and X),

and P = I - X * M * X',

where I is an identity matrix (n by n). Is rank(P) = n - t?

I wrote a testing MATLAB program to verify it.

%%%%%%%%%%%rank_test.m
for n=7:15
    for t=2:7
        X=randn(n,t);      % random n-by-t data matrix
        M=inv(X'*X);       % requires X to have full column rank
        I=eye(n,n);
        P=I-X*M*X';
        r=rank(P);
        fprintf('the rank of P at n=%d, t=%d is %d\n',n,t,r);
    end
end

And the result IS correct:
the rank of P at n=7, t=2 is 5
the rank of P at n=7, t=3 is 4
the rank of P at n=7, t=4 is 3
the rank of P at n=7, t=5 is 2
.........................

%%%%%%%%%%%%%%%%%

So far, I vaguely feel this identity is right by calculating the trace

tr(P)=tr(I)-tr(X * M * X')=tr(I)-tr(X'*X*M)=n-t

But is there any rigorous calculation or proof that anybody knows?
Thanks
*******************************************Robert Israel
writes:
> And, I verified that X does not have to have full rank by setting some
> rows of X to trivial one. It doesn't affect much except for occasional
> MATLAB complaining for singularities.

Rank testing using floating point arithmetic is dangerous.
> On May 8, 11:05 am, wrote:
> > Hum.... There ARE some exceptions. I think it's due to the numerical
> > subtlety.

> > On May 8, 10:58 am, wrote:

> > > Suppose X is a random data matrix with dimension n by t

> > > M=inv(X ' * X), i.e. M is the inverse of product of (transpose X and
> > > X)

... which of course requires X to have rank t.
> > > and P=I-X * M * X'

> > > I is an identity matrix (n by n), is rank (P) = n-t?

Note that PX = X - X M X' X = 0, while if Px = 0, x = X M X' x.
Thus Ker(P) = Ran(X). P is the orthogonal projection on the
orthogonal complement of Ran(X), so its rank is n - rank(X).
**********************************************Wei
OK. Dumb as I am, I think I've figured out your solution:

Because PX = 0, Ran(X) is included in Ker(P).

And because any x with Px = 0 satisfies x = X M X' x, such an x is included in
Ran(X), so Ker(P) is included in Ran(X).

Thus Ker(P) = Ran(X).

By the rank-nullity theorem, (dimension of the kernel of P) + (rank of P) = n,
so rank(P) = n - dim Ran(X) = n - t.

We get the answer.

That's smooth. I feel good.
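For the record, a numpy counterpart of the MATLAB check above (my own sketch), which also verifies that P is a symmetric idempotent, i.e. an orthogonal projection, which is what makes the rank argument work:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 9, 3
X = rng.standard_normal((n, t))    # random n-by-t matrix, full column rank a.s.
M = np.linalg.inv(X.T @ X)
P = np.eye(n) - X @ M @ X.T

assert np.linalg.matrix_rank(P) == n - t   # rank(P) = n - t
assert np.allclose(P, P.T)                 # symmetric
assert np.allclose(P @ P, P)               # idempotent: P is a projection
assert np.allclose(P @ X, 0)               # Ran(X) lies in Ker(P)
```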

Tuesday, April 29, 2008

Worth Reading

J. Ponce, et al. Dataset Issues in Object Recognition
comment: this paper provides a very good review of part-based object recognition algorithms and reveals a problem with existing algorithms: the datasets they use tend to be oversimplified. It also points to some more difficult real-world datasets for further research.

Monday, April 28, 2008

Mardia. Multivariate Analysis.

Exercise 3.3.3

CG Final Project: Augmented Reality

To simplify the project, Dr. Eraldo let us use the Camera Calibration Toolbox for MATLAB and its dataset for the experiment, so we can use the calibration results directly. The toolbox works fine, but a tricky problem drove me crazy.

The basic idea is to use the calibration results to:
(1) set up the OpenGL intrinsic parameters for the camera;
(2) use the extrinsic parameters to pose the camera.

For problem (1), I first tried ARToolkit's argConvGLcpara2 function to convert the intrinsics into OpenGL projection matrix format. Then I switched to a simplified version using glFrustum instead.
Suppose the width and height of the image are 640 and 480 and the focal length is 600; then we can safely set the frustum as
glFrustum(-320,320,-240,240,600, 100000);

Note that ARToolkit uses a different unit! It normalizes everything by the focal length. I spent a hell of a lot of time figuring this out.

This simplification works for modern cameras, whose skew, aspect-ratio deviation, and principal-point offset are very small.
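The glFrustum numbers above can be derived mechanically. A small Python helper (my own sketch; it assumes square pixels and a centered principal point, exactly the simplified case discussed above):

```python
def frustum_from_intrinsics(width, height, focal, near, far):
    """glFrustum parameters for a pinhole camera with square pixels and a
    centered principal point (hypothetical helper for the simplified case)."""
    s = near / focal                 # scale the image plane onto the near plane
    return (-width / 2 * s, width / 2 * s,
            -height / 2 * s, height / 2 * s, near, far)

# Placing the near plane at the focal length gives exactly +/- half the image:
params = frustum_from_intrinsics(640, 480, 600, near=600, far=100000)
# params corresponds to glFrustum(-320, 320, -240, 240, 600, 100000)
```

For a non-centered principal point (cx, cy) one would shift left/right and bottom/top accordingly, which is what the full ARToolkit conversion handles.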

For problem (2), it seems easy to load the GL_MODELVIEW matrix after some calculation. But with the data output by the Camera Calibration Toolbox, none of the existing formulations were right.

This is because in this toolbox the camera coordinate system differs from OpenGL's, where the z-axis points in the direction opposite the camera. So another transformation must be applied to reverse the z-axis and y-axis. I simply pulled out paper and worked through the math to manually derive the relationship between the viewpoint, view direction, and up vector to solve the problem.

















A pitfall that bothered me a whole weekend.

This question looks easy at first glance; the problem is that I worked out two contradictory solutions. Fortunately, I figured it out with the help of Ray Vickson in the Google group sci.math:

******************************************************Wei
Question: x1, x2, x3 are I.I.D. samples from normal distribution
N(0,cosi^2).

y1=x1+x2, y2=x2+x3, then what is the conditional distribution of y1
given y2, i.e. p(y1|y2)?

I worked out two solutions, but they contradict each other. Can anybody help
me figure out why?

Solution1:

Given y2=a, then x2=a-x3, so y1=x1+a-x3. So given y2=a, it's easy to
verify that y1 is still a normal distribution
N(a,2*cosi^2).

Solution2:

vector [x1 x2 x3]' is a multivariate normal distribution and its mean
is [0, 0, 0]' and covariance:
| cosi^2 0 0 |
S= | 0 cosi^2 0 |
| 0 0 cosi^2 |

vector [y1 y2]'= A * [x1 x2 x3]'

A is a matrix
| 1 1 0 |
| 0 1 1 |

So, according to the theorem (Mardia, Multivariate Analysis, 3.2.1,
3.2.4), a linear transformation of a multivariate normal distribution
is still normal, and its mean and covariance matrix can be obtained
by:

mean=A* [0,0,0]'=[0,0,0]';

variance=A*S*A'
| 2* cosi^2 cosi^2 |
= | cosi^2 2*cosi^2|

And given y2=a, y1 is still a normal distribution, and the mean and
covariance is

mean = u1 + Sigma21 * inv(Sigma22) * (a - u2) = 0 + 1/2 * (a - 0)
= a/2 ?????? which is not equal to solution 1???

And the same problem happens with the variance matrix.

How could this happen? And which solution is right?

Please help.

Thanks

*************************************************Ray Vickson
Let's eliminate the constant cosi^2, because it serves no useful
purpose (and anyway it is not clear whether this should be cos(i^2) or
cos(i)^2); thus, take Xi ~ N(0,1). Starting from the multivariate
distribution of (X1,X2,X3) and using standard methods (I used moment-
generating functions, but there are other ways) we get the
multivariate distribution of (Y1,Y2) as
f(y1,y2) = C*exp(-(1/6)[y1,y2]M[y1,y2]'), where [y1,y2]' = column
vector and M is the matrix M = [[2,-1],[-1,2]] (that is, row(1) =
[2,-1] and row(2) = [-1,2]). The variance-covariance matrix is [[2,1],
[1,2]], whose inverse is (1/3)M. We have C = sqrt(3)/(6*pi). The
conditional distribution of Y1, given Y2 = y2 is f(y1|y2) = f(y1,y2)/
f_Y2(y2), where f_Y2(y2) = integral(f(y1,y2) dy1, y1=-inf..inf) = exp(-
y2^2/4)/(2*sqrt(pi)), so we get f(y1|y2) = exp(-(2y1 - y2)^2/12)/
sqrt(3*pi). This gives E(Y1|Y2=y2] = y2/2 and var(Y1|Y2=y2) = 3/2
(letting Maple 9.5 do all the computations).

So, Solution 2 appears to be correct. This leaves the question: what
is wrong with Solution 1? That is a good question, and I don't see the
answer at the moment. I am pretty sure that Solution 2 is OK because
it's obtained using known, well-proved formulas applied in a detailed
step-by-step manner.

R.G. Vickson

*************************************************Wei

Thanks, Ray. I think I've figured out this problem now.

Solution 1 is flawed; the reason is:

given y2 = a, x2 = a - x3, so y1 = x1 + a - x3. No problem, but now the
distribution of x3 is not N(0,1) anymore: it is p(x3 | y2 = a), whose
variance is constrained by the conditioning, and this conditional
distribution of x3 is N(a/2, 1/2). p(x1 | y2 = a) is still N(0,1)
because x1 and y2 are independent.

So y1 is NOT what I assumed.

Thank you. This paradox bothered me for a whole day!
***************************************Ray Vickson
Of course! I could kick myself for not seeing it. This is yet another
instance showing how careful one must be when using probability
arguments, and that is why the detailed, long-winded approach is
sometimes the best.
R.G. Vickson
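Vickson's answer, E[Y1 | Y2 = a] = a/2 and Var(Y1 | Y2 = a) = 3/2, is easy to check by Monte Carlo. A sketch of mine, conditioning approximately on a small window around a:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
x1, x2, x3 = rng.standard_normal((3, N))   # i.i.d. N(0,1) samples
y1, y2 = x1 + x2, x2 + x3

a = 1.0
cond = y1[np.abs(y2 - a) < 0.05]   # approximate conditioning on y2 = a

# Theory: E[Y1 | Y2 = a] = a/2, Var(Y1 | Y2 = a) = 3/2
assert abs(cond.mean() - a / 2) < 0.05
assert abs(cond.var() - 1.5) < 0.1
```

Notice the empirical conditional mean is nowhere near a, which is exactly where the flawed Solution 1 goes wrong.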

Worth Reading

Probabilistic Automata: Vidal et al., IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 27, NO. 7, JULY 2005

Presentation Paper for Formal Language class.

comment: The probabilistic extensions of NFA and CFG are reviewed in this paper. It's extensive and well organized.
The extension to tree automata is difficult to grasp. Consider that the data an NFA or even a Turing machine accepts are 'strings', i.e. sequences of characters, so aba and baa are considered different; but this structure is simple.
So how about adding stronger structure, like a tree structure? Then trees formed by characters (called labels in the paper) will be processed by an automaton.
For a classical automaton, the characters in the string are eaten one by one, and the automaton changes its state correspondingly. For a tree automaton, every subtree is eaten before a new node can be eaten, and because the number of subtrees can vary between nodes, the dimension of the domain of the transition functions varies accordingly.

The paper Using Regular Tree Automata as XML Schemas provides a good example of applying this to parse XML documents (which are indeed instances of tree data). Although tree automata are interesting, I don't think they extend NFA much, because every tree corresponds to a string via an ordered traversal.

Wednesday, April 23, 2008

Worth Reading

Ariadna Quattoni et al. Conditional Random Fields for Object Recognition

comment: Conditional Random Fields with hidden variables. They use a Conditional Random Field for object classification in an ad hoc way. The results and methods are still preliminary.

Wednesday, April 09, 2008

Fast ND multidimensional gaussian mixture random generator

http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=17489&objectType=file

After fixing a small bug in the code, I found this program works well for generating samples from a multidimensional Gaussian mixture distribution.
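Sampling from a Gaussian mixture is simple enough to sketch directly with numpy (a generic illustration of mine, not the code from the File Exchange entry): first draw component labels from the mixing weights, then draw from each component's Gaussian:

```python
import numpy as np

def sample_gmm(n, weights, means, covs, rng):
    """Draw n samples from a d-dimensional Gaussian mixture (generic sketch)."""
    weights = np.asarray(weights, dtype=float)
    # which component each sample comes from
    labels = rng.choice(len(weights), size=n, p=weights / weights.sum())
    out = np.empty((n, len(means[0])))
    for k in range(len(weights)):
        idx = labels == k
        out[idx] = rng.multivariate_normal(means[k], covs[k], size=idx.sum())
    return out

rng = np.random.default_rng(0)
samples = sample_gmm(
    50_000,
    weights=[0.3, 0.7],
    means=[[0.0, 0.0], [5.0, 5.0]],
    covs=[np.eye(2), 0.5 * np.eye(2)],
    rng=rng,
)
# mixture mean per dimension is 0.3*0 + 0.7*5 = 3.5
```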

L. Greengard and V. Rokhlin, A Fast Algorithm for Particle Simulations. 1987
comment: A super clear paper on Particle Simulations. After you read this paper, you can clearly see that KDE is actually a special case of this algorithm.

Monday, April 07, 2008

Worth Reading

William T. Freeman and Edward H. Adelson. The Design and Use of Steerable Filters. IEEE PAMI
comment: Kind of old, but a nice tutorial on steerable filters and Gaussian derivative filters.

Jerry Jun Yokono, Tomaso Poggio. Oriented Filters for Object Recognition: an empirical study.
Jerry Jun Yokono, Tomaso Poggio. Rotation Invariant Object Recognition from One Training Example. Tech Report 2004.

comment: Using dominant orientation information to achieve rotation invariance.

Sunday, April 06, 2008

Tuesday, April 01, 2008

Worth Reading

Stephen Della Pietra, Vincent Della Pietra, John Lafferty. Inducing Features of Random Fields. PAMI. 1997

This paper is quite theoretical and uses several important theorems of mathematical analysis. Well written and very readable.

page 4. The proofs of Propositions 1 and 2 are not as straightforward as the paper claims, so I worked them through step by step; here is the scanned version of my manuscript.
It's interesting to see in Proposition 2 that the problem can be reduced to a Bernoulli model.

Is there any way we can share our comments on papers?


Appendix I DUALITY. Lemma 4


4.b Auxiliary functions, Lemma 3. One should note the following theorem from analysis:
any sequence in a compact set S must have a cluster point in S.

In the proof of Proposition 5: because the sequence q^k belongs to Q, its cluster points must belong to the closure of Q.

Since the Improved Iterative Scaling algorithm is based on a bound function A(r, q), I strongly suspect one could find an even better bounding function and further improve the algorithm's convergence speed; I also suspect this problem may already have been solved somewhere in the optimization literature.

Thursday, March 27, 2008

Mardia, Kent, Bibby. Multivariate Analysis




Question 1.6.1 and Solution.
Pretty easy, but worth hands-on practice.








Solutions to Question 2.2.1 to 2.2.5

Tuesday, March 25, 2008

A couple of troubles

OpenCV doesn't work well on Vista.
Solved: don't link to highgui.lib.

The function icvCalcOpticalFlowLK_8u32fR doesn't support window sizes greater than or equal to 16. The optimized code is so difficult to read and modify that I just abandoned it.

The LK optical flow algorithms in OpenCV and UCF's MATLAB code produce different results. I have to investigate further what's going on here.

Saturday, March 22, 2008

Problem of illumination alignment


Windows Live photo album provides a free photo-stitching function. It works well, but uneven exposure across the images, especially when the sun is in the image, causes problems.
I suspect the sun compresses the image's dynamic range, making it pretty hard to align the illumination.

Sunday, March 16, 2008

Worth Reading

Bruce D. Lucas. An Iterative Image Registration Technique with an Application to Stereo Vision. 1981
comment: Expands the error function using a Taylor expansion. A simple yet effective method.
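The idea fits in a few lines. A 1-D toy version of the registration (my own sketch, not Lucas and Kanade's code): linearize f(x - d) in d via a first-order Taylor expansion and iterate the resulting least-squares update:

```python
import numpy as np

def estimate_shift(f, g, iters=25):
    """Estimate d such that g(x) ~= f(x - d), by iterating the linearized
    least-squares update (the Taylor-expansion trick of the paper)."""
    x = np.arange(len(f), dtype=float)
    d = 0.0
    for _ in range(iters):
        fw = np.interp(x - d, x, f)   # template warped by current estimate
        grad = np.gradient(fw)        # spatial derivative of warped template
        denom = np.sum(grad * grad)
        if denom < 1e-12:
            break
        d -= np.sum(grad * (g - fw)) / denom   # Gauss-Newton step
    return d

# Toy test: a smooth bump shifted by 1.3 samples
x = np.arange(100, dtype=float)
f = np.exp(-(x - 50.0) ** 2 / 50.0)
g = np.interp(x - 1.3, x, f)   # g(x) = f(x - 1.3)
print(estimate_shift(f, g))    # close to 1.3
```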

Friday, March 14, 2008

Worth Reading

David A. Forsyth et al., Computational Studies of Human Motion: Part1, Tracking and Motion Synthesis.
comment: a good text for reviewing the work in this area.

See p. 108: this book gives an excellent example of a simple nonlinear dynamical system exhibiting multi-modal behavior. I managed to verify this model with the following MATLAB script:

X0=randn(10000,1);     % 10000 samples from a standard normal
k=100;
epsilon=1e-3;
for ii=1:k
    X0=X0+epsilon*sin(X0);   % iterate the nonlinear dynamics
end
hist(X0);

With small k (=100), the system is still uni-modal.

With k=10000, there are two modes.

A solution to Question 5.1.8

John E. Hopcroft, Introduction to Automata Theory, Languages, and Computation. 2nd Edition.

Consider the CFG G defined by productions

S->aSbS|bSaS|e

Prove that L(G) is the set of all strings with an equal number of a's and b's.

The point of the inductive proof is that for an arbitrary string s with equal numbers of a's and b's and length greater than k, k >= 1, we can always find substrings x, y such that s = axby or s = bxay, where x and y each have equal numbers of a's and b's.

You can click on the following image to see a zoomed version if it's too small. (Please don't copy it for your homework)


My proof here is clumsy; I hope someone will show me an elegant one.

I found another solution by accident today (March 21, 2008) at http://www.cs.iit.edu/~cs532/HW/HW_04_Solution.pdf

The idea is almost the same, and it's also wordy.
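The claim itself is easy to check mechanically for short strings. A brute-force Python sketch (the helper `in_L` is my own): test derivability from S -> aSbS | bSaS | e by memoized recursion on substrings, and compare against simply counting a's and b's:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def in_L(s):
    """Is s derivable from S -> aSbS | bSaS | e? Any derivation of a nonempty
    string starts with the rule whose first terminal matches s[0], so we try
    every position j for the paired opposite terminal."""
    if s == "":
        return True
    other = "b" if s[0] == "a" else "a"
    return any(
        s[j] == other and in_L(s[1:j]) and in_L(s[j + 1:])
        for j in range(1, len(s))
    )

# Compare against the counting criterion for all strings up to length 8
for n in range(9):
    for tup in product("ab", repeat=n):
        s = "".join(tup)
        assert in_L(s) == (s.count("a") == s.count("b"))
```

This is of course no substitute for the induction, but it rules out an off-by-one mistake in the claimed decomposition.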

Wednesday, March 12, 2008

Office Hour

Know, that I am one among the heretical Jann and I sinned against Sulayman, David son (on the twain be peace!) I together with the famous Sakhr al Jinni;"[FN#71] whereupon the Prophet sent his minister, Asaf son of Barkhiya, to seize me; and this Wazir brought me against my will and led me in bonds to him (I being downcast despite my nose) and he placed me standing before him like a suppliant. When Sulayman saw me, he took refuge with Allah and bade me embrace the True Faith and obey his behests; but I refused, so sending for this cucurbit[FN#72] he shut me up therein, and stopped it over with lead whereon he impressed the Most High Name, and gave his orders to the Jann who carried me off, and cast me into the midmost of the ocean. There I abode an hundred years, during which I said in my heart,

"Whoso shall release me, him will I enrich for ever and ever." But the full century went by and, when no one set me free, I entered upon the second five score saying,

"Whoso shall release me, for him I will open the hoards of the earth." Still no one set me free and thus four hundred years passed away. Then quoth I,

"Whoso shall release me, for him will I fulfil three wishes." Yet no one set me free. Thereupon I waxed wroth with exceeding wrath and said to myself,

Sunday, March 09, 2008

Topology and Borel Sets

The Borel sets form the smallest σ-algebra containing a topology, so they include more than just the 'open sets' of the topology.
In Rudin, Real and Complex Analysis, the proofs of many theorems start with the open sets and then use bounding open and compact sets to handle the other sets.
Say, for an arbitrary Borel set E, find a compact set K and an open set V with K ⊆ E ⊆ V such that u(V - K) < epsilon. Then K and V are very tight bounds on the measure of E.
This bounding can then be used in inequalities. See the Riesz Representation Theorem and Theorems 2.17 and 2.18.
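The bounding statement is, if I recall Rudin's Theorem 2.17 correctly, the inner/outer regularity of the measure:

```latex
\mu(E) \;=\; \inf\{\mu(V) : E \subseteq V,\ V \text{ open}\}
       \;=\; \sup\{\mu(K) : K \subseteq E,\ K \text{ compact}\},
```

so for any epsilon > 0 one can pick a compact K and an open V with K ⊆ E ⊆ V and μ(V \ K) < epsilon.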

Saturday, March 08, 2008

Rudin, Real and Complex Analysis

p44

By Urysohn's lemma, there exists f in Cc(X) such that f(x)=1 on K1 and f(x)=0 on K2....

comment:

One should note a property of Hausdorff spaces: disjoint compact sets in a Hausdorff space can always be separated, which means for K1 and K2 there are open sets V1 and V2 such that K1 is contained in V1, K2 is contained in V2, and V1 and V2 have no common points. Then Urysohn's lemma can be applied and the preceding conclusion can be drawn.

further comment:

This book should be read in parallel with Halmos' book and an introductory topology textbook.

Friday, March 07, 2008

Rudin, Real and Complex Analysis

p35

Moreover, it turns out that the euclidean properties of Rn play no role in the proof; in fact, if one thinks of them too much they just get in the way....

Yeah, right; that's why they constantly create new concepts and words to keep things simple.

Thursday, March 06, 2008

Worth Reading

Lena Gorelick et al. Shape Representation and Classification Using the Poisson Equation.

comment: A new transform of silhouettes other than the distance transform.
How can these guys find so many ideas from physics, maths, and other disciplines?
Am I supposed to be a know-it-all to finish my Ph.D.?
Digging out fundamental theories is more fun, but it's a daunting task.

Moshe Blank, Lena Gorelick et al. Actions as Space-Time Shapes.
comment: Lena Gorelick

Eli Shechtman, Matching Local Self-Similarities across Images and Videos
comment: A quite interesting way to use self-repeating patterns as a feature, so texture information is ruled out.

Thursday, February 28, 2008

Worth Reading

Ross. Kindermann, Markov Random Fields and Their Applications.
comment: a good, yet outdated, book introducing the physical background of Markov Random Fields.

Peter Clifford. Markov Random Fields in Statistics
comment: A readable and principled tutorial on Markov Random Fields, but you need to work through the math carefully (fortunately I'm good at that). The polygonal coloring measure and the blah-blah that follows are quite technical (geometrical probability, OMG) and I can't afford to dig in and read it.

Hanna M. Wallach, Conditional Random Fields: An Introduction
comment: A readable introduction to Conditional Random Fields, which I think is easier to understand than John Lafferty's paper.

John Lafferty, Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data.
comment: It's a difficult paper, especially since I lack the background knowledge of NLP.

Monday, February 25, 2008

Worth Reading

Darroch, J.N. & Ratcliff, D. Generalized iterative scaling for log-linear models. 1972
comment: A quite readable paper on a special case (log-linear, a kind of exponential distribution) of probability function estimation that can be done recursively, similar to (though not exactly like) EM. It's based on the non-negativity of the mutual information between probabilities: instead of maximizing by expectation, it maximizes mutual information. An interesting comparison could be made between this algorithm and EM, but I don't have time for that, and maybe nobody is interested in that specific aspect.

Andrew McCallum, Maximum Entropy Markov Models for Information Extraction and Segmentation.
comment: A graphical probabilistic model like an HMM, but it's not generative; rather, it's descriptive (I name it this way). You construct the model by putting together the transitions you want, not by modeling how the symbols are generated. Yet it's as efficient as an HMM.

Friday, February 22, 2008

Worth Reading

T.F Cootes, C.J. Taylor et al. Training Models of Shape from Sets of Examples. 1992

Map Projections-A working Manual, USGS Professional Paper.
comment: maps are actually far more complicated than we thought.

P.L. Worthington and E.R. Hancock, "Surface Topography Using Shape-from-Shading," Pattern Recognition. 2001

William A.P. Smith, Edwin R. Hancock, "Recovering Facial Shape Using a Statistical Model of Surface Normal Direction", 2006

Tuesday, February 19, 2008

Worth Reading

Paul Debevec. Acquiring the Reflectance Field of a Human Face. SIGGRAPH 2000

comment: very interesting.


Polarized Light
http://acept.asu.edu/PiN/rdg/polarize/polarize.shtml

Monday, February 18, 2008

Try this video stuff.

Ray tracing project for the CG class, homework3


Sunday, February 17, 2008

Definition

Let X be a topological space. There exists a smallest cosi-algebra beta in X such that every open set in X belongs to beta. The members of beta are called the Borel sets of X.


comment: Borel set is not necessarily defined on R ( as I learned in advanced calculus). It's actually a special algebra defined on arbitrary topological space.

Friday, February 15, 2008

It's a shame

that every classroom has only a finite number of markers, yet the broken ones constitute a dense set among them......

Tuesday, February 12, 2008

Worth Reading

Touching Soap Films

http://page.mi.fu-berlin.de/polthier/booklet/intro.html

comment: excellent introduction to minimal surfaces.

Monday, February 11, 2008

Worth Reading

http://www.personal.soton.ac.uk/rchc/Teaching/GTPBootstrap/ExtraDetails/NelderMeadProof.pdf

comment: excellent explanation of Nelder-Mead Method
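For reference, a minimal pure-Python sketch of the Nelder-Mead simplex iteration, using the standard reflection/expansion/contraction/shrink coefficients (my own restatement, not taken from the linked proof):

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    n = len(x0)
    # initial simplex: x0 plus one perturbed point per coordinate axis
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # centroid of every vertex except the worst
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # reflect the worst vertex through the centroid
        xr = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(best):
            # expansion: try moving further along the reflected direction
            xe = [centroid[i] + gamma * (xr[i] - centroid[i]) for i in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            # contraction toward the worst vertex
            xc = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                # shrink the whole simplex toward the best vertex
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]
```

For example, `nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, [0.0, 0.0])` converges to roughly (1, 2).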

Ilya Baran, Automatic Rigging and Animation of 3D Characters.
comment: presentation paper for the CG class.

Saturday, February 09, 2008

Worth Reading

Frisken, 2000. Adaptively Sampled Distance Fields: A General Representation of Shape for Computer Graphics
comments: An octree representation of volumetric shapes. Unlike a plain octree, a cell splits only when the field values sampled inside it deviate from the bilinear or trilinear interpolation of its corner values by more than a tolerance.

comments: Can we modify this representation for manifolds?
Put another way: can the representation be extended beyond scalar fields (the distance field in the paper) to higher-dimensional fields?
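A rough 2D sketch of the splitting rule. This is simplified: it tests the interpolation error only at the cell center, whereas the paper tests the field throughout the cell, and the circle distance field is an invented example:

```python
import math

def circle_sdf(x, y, cx=0.5, cy=0.5, r=0.3):
    """Signed distance to a circle (an invented test field)."""
    return math.hypot(x - cx, y - cy) - r

def bilinear(c00, c10, c01, c11, u, v):
    return (c00 * (1 - u) * (1 - v) + c10 * u * (1 - v)
            + c01 * (1 - u) * v + c11 * u * v)

def subdivide(f, x0, y0, size, tol, depth, max_depth, cells):
    c00, c10 = f(x0, y0), f(x0 + size, y0)
    c01, c11 = f(x0, y0 + size), f(x0 + size, y0 + size)
    # split only where bilinear interpolation of the corner samples
    # fails to reproduce the field (here tested at the center only)
    err = abs(f(x0 + size / 2, y0 + size / 2)
              - bilinear(c00, c10, c01, c11, 0.5, 0.5))
    if err <= tol or depth >= max_depth:
        cells.append((x0, y0, size))
        return
    h = size / 2
    for dx in (0.0, h):
        for dy in (0.0, h):
            subdivide(f, x0 + dx, y0 + dy, h, tol, depth + 1, max_depth, cells)

cells = []
subdivide(circle_sdf, 0.0, 0.0, 1.0, 1e-3, 0, 6, cells)
```

Regions where the field is nearly linear stay coarse; cells near curvature (and near the cone apex at the circle's center) keep subdividing.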

Friday, February 08, 2008

Subtlety of Floating-Point Inexactness in Ray Tracing


When a ray R hits a target T at a point p, we may spawn new reflection or refraction rays R1, R2 starting from p and compute their intersection points. However, due to floating-point inexactness, the first intersection found for R1 or R2 will be p itself, up to a tiny perturbation. This hit must be discarded, otherwise some rays return incorrect values and the scene looks like this (the blue ball has many inconsistent spots on it):







The right scene should be like this:
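In code, the usual fix is to reject any intersection whose ray parameter t falls below a small epsilon. A minimal sketch for a sphere (the epsilon value is a typical choice, not from any particular renderer):

```python
import math

EPS = 1e-6  # tolerance for rejecting near-zero hits (a typical value)

def hit_sphere(origin, direction, center, radius):
    """Smallest ray parameter t with t > EPS, or None if the sphere is missed."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
        # t near 0 is the spawning point p itself, recovered with
        # floating-point error: discard it, keep only forward hits
        if t > EPS:
            return t
    return None
```

A secondary ray starting exactly on the surface and pointing away, e.g. `hit_sphere((1, 0, 0), (1, 0, 0), (0, 0, 0), 1.0)`, then returns None instead of re-reporting its own origin.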

Thursday, January 24, 2008

GLUI and Visual Studio .Net 2003





When compiling a C++ program depending on glui32.lib, I got error like this:

error LNK2005: _free already defined in LIBCD.lib(dbgheap.obj)

The problem can be solved by excluding the LIBCD default library (the debug C runtime) in the project's linker settings, e.g. via /NODEFAULTLIB:LIBCD.

Thursday, January 17, 2008

Needs clarification



Proof of the linear transformation property of the multivariate normal distribution.

see Kingman and Taylor 1966, p.372

Should there be an erratum for a missing transpose sign?

See the red circle in this picture.
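For reference, the standard statement of the identity (my own restatement, not copied from Kingman and Taylor):

```latex
If $X \sim \mathcal{N}(\mu, \Sigma)$ and $Y = AX$, then
$Y \sim \mathcal{N}\!\left(A\mu,\; A\Sigma A^{\mathsf T}\right)$, since
\[
\operatorname{Cov}(Y)
= E\!\left[(AX - A\mu)(AX - A\mu)^{\mathsf T}\right]
= A\, E\!\left[(X - \mu)(X - \mu)^{\mathsf T}\right] A^{\mathsf T}
= A \Sigma A^{\mathsf T}.
\]
```

The second factor must carry the transpose, which is presumably where the suspected missing transpose sign would matter.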

Worth Reading

Multivariable Calculus
George Cain & James Herod
http://www.math.gatech.edu/~cain/notes/calculus.html

comments: good to read for clarification of the Jacobian matrix, the total derivative, and the change-of-variables theorems.
Measure theory surely works, but this online book is more concise and readable.

Obviously, the books I read before are a little outdated. How could they totally neglect multivariate calculus?

Monday, January 14, 2008

HumanEva: bad motion capture data?

It seems that some of the motion capture sequences, e.g. 'ThrowCatch' in HumanEva I, are bad.

Sunday, January 13, 2008

Worth Reading

Y. Lipman. Differential coordinates for interactive mesh editing.
O. Sorkine. Differential representations for mesh processing.
Edilson de Aguiar. Rapid Animation of Laser-scanned Humans.

comment: an interesting representation of meshes and of mesh morphology.

Thursday, January 10, 2008

Worth Reading

Fast Human Pose Estimation using Appearance and Motion via Multi-Dimensional Boosting Regression.

comment: Another of my ideas that has already been worked on.

My thoughts are:

The point here is using 2D appearance and motion cues to infer the 3D state space.

Because 3D motion lives in a high-dimensional state space that actually lies on a lower-dimensional manifold, and there are generalized regression and interpolation methods for mapping between the ambient space and the manifold, we can use the lower-dimensional information to infer the 3D state.

This might not be an accurate method, but it offers a way to speed up motion tracking.

The work is incomplete and could be expanded in many directions.
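As a toy sketch of the mapping idea, with k-nearest-neighbor regression standing in for the generalized regression mentioned above (the descriptors and poses below are made up):

```python
import math

def knn_regress(query, examples, k=3):
    """Map a low-dimensional descriptor to a high-dimensional pose by
    averaging the poses of the k nearest training descriptors.
    examples: list of (descriptor_tuple, pose_list) pairs."""
    nearest = sorted(examples, key=lambda ex: math.dist(query, ex[0]))[:k]
    dim = len(nearest[0][1])
    return [sum(ex[1][i] for ex in nearest) / k for i in range(dim)]
```

Any smoother regressor (RBF interpolation, Gaussian process, boosting as in the paper) can replace the averaging step; the structure of the mapping stays the same.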

Saturday, January 05, 2008

I have many ideas

But most of them have been worked on.
And lots of the rest die in my lab.

frustrating....

Wednesday, January 02, 2008

Read Rob Hess' SIFT data into MATLAB

%% read SIFT features from Rob Hess' output
%% compatible with Rob's version 1.1.1-20070913 (2006)
%% see imgfeatures.c in Rob's code for more details
%%
%% by smartnose@gmail.com, Jan. 2nd 2008
%%
%% INPUTS:
%%   fn - full path to the file to be read
%%
%% OUTPUTS:
%%   rs - result structure
%%        n:  number of features
%%        d:  dimension of the feature vector (128 by default)
%%        sv: scale and position vectors; each row gives the
%%            y, x, scale, orientation of one feature
%%        fv: feature vectors, one per row


function [rs]=import_rob_sift(fn)
if(~exist(fn,'file'))
    error('file does not exist');
end

M=dlmread(fn); % M will be an N*20 matrix, with blank cells filled by 0

rs.n=M(1,1);
rs.d=M(1,2);

SV=zeros(rs.n,4);
FV=zeros(rs.n,rs.d);

ln=ceil(rs.d/20); % number of 20-column rows per feature vector

for ii=1:rs.n
    % each feature occupies ln+1 rows: one header row plus ln data rows
    % (for d=128, ln+1=8, matching the original hard-coded stride)
    SV(ii,:)=M((ii-1)*(ln+1)+2,1:4);
    FT=[];
    for jj=2:ln+1
        FT=cat(2,FT,M((ii-1)*(ln+1)+jj+1,:));
    end
    FV(ii,:)=FT(1,1:rs.d);
end

rs.sv=SV;
rs.fv=FV;
end