Paper #1 - Revisiting Perspective Information for Efficient Crowd Counting
Key Notes
- Perspective-aware convolutional neural network (PACNN)
- Estimate crowd counts via the detection of each individual pedestrian
- Crowd counting is cast as estimating a continuous density function (see the density-map sketch after this list)
- Represent the crowd as a group of detected individual pedestrians
- Extracting effective features from crowd images
- Utilizing various regression functions to estimate crowd counts
- Edge features, texture features
- Blue in the heatmaps indicates small perspective values, while yellow indicates large values
- Generate the ground-truth (GT) perspective maps
- Using the K-NN distance to approximate the pedestrian head size
- VGG net backbone
- Three density maps predicted at different scales and combined using the perspective information
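A minimal sketch of how a ground-truth density map can be built from head annotations, using the mean K-NN distance between heads as a proxy for head size to set the Gaussian width. The function name, the beta factor, and the fallback sigma are illustrative assumptions, not PACNN's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import cKDTree

def density_map_from_heads(points, shape, k=3, beta=0.3):
    """Build a density map whose sum equals the crowd count.

    points : (N, 2) array of (row, col) head annotations
    shape  : (H, W) of the output map
    k      : number of nearest neighbours used to estimate head size
    beta   : scale from mean K-NN distance to Gaussian sigma (assumed value)
    """
    density = np.zeros(shape, dtype=np.float32)
    if len(points) == 0:
        return density
    tree = cKDTree(points)
    # Distances to the k nearest heads; the first neighbour is the point itself.
    dists, _ = tree.query(points, k=min(k + 1, len(points)))
    for (r, c), d in zip(points, dists):
        sigma = beta * float(np.mean(d[1:])) if len(points) > 1 else 15.0
        delta = np.zeros(shape, dtype=np.float32)
        rr = min(int(round(r)), shape[0] - 1)
        cc = min(int(round(c)), shape[1] - 1)
        delta[rr, cc] = 1.0
        density += gaussian_filter(delta, sigma)  # each head contributes unit mass
    return density
```

Summing the map recovers the count: `density_map_from_heads(heads, img.shape[:2]).sum()` should be close to `len(heads)`.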
Key Notes
- Leverage the KLT (Kanade-Lucas-Tomasi) tracker (a tracking sketch follows this list)
- Determine the motion parameters (e.g., affine or pure translation) of local windows W from an image I to a consecutive image J
- KLT runs until no more initial features can be tracked
- Key parameter: the size of the local tracking window
- Inter-object occlusion, self-occlusion, exit from the picture
- Features are agglomeratively clustered into independent objects
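A minimal sketch of a KLT tracking loop using OpenCV's pyramidal implementation (`cv2.goodFeaturesToTrack` plus `cv2.calcOpticalFlowPyrLK`). The `winSize` argument is the window-size parameter noted above; the video path and all numeric settings are illustrative, not the paper's configuration.

```python
import cv2

def klt_step(prev_gray, next_gray, prev_pts, win=21):
    """Track feature points from frame I to the consecutive frame J."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(win, win),   # size of the local window W
        maxLevel=3,           # pyramid levels to handle larger motion
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1  # drop features lost to occlusion or frame exit
    return prev_pts[good], next_pts[good]

cap = cv2.VideoCapture("crowd.mp4")  # illustrative input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=5)
while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break  # stop once no initial features can be tracked any further
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _old, pts = klt_step(prev_gray, gray, pts)
    prev_gray = gray
```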
Key Notes
- Develop effective features to describe crowd
- Different scenes have different perspective distortions, crowd distributions and lighting conditions.
- CNN Model to detect crowds
- Find Clusters of People
- Apply models to count people in each cluster
- Patch-based density estimation and counting (a patch-counting sketch follows this list)
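A minimal sketch of the patch-and-count idea: tile the image, estimate a count per patch, and sum the estimates. The `count_by_patches` helper and the placeholder `model` callable are illustrative stand-ins for the paper's CNN, not its actual interface.

```python
import numpy as np

def count_by_patches(image, model, patch=128):
    """Tile the image into patches, count people per patch, and sum."""
    h, w = image.shape[:2]
    total = 0.0
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tile = image[top:top + patch, left:left + patch]
            total += float(model(tile))  # per-patch count estimate
    return total

# Dummy "model" that returns zero for every patch, just to show the call shape.
dummy_model = lambda tile: 0.0
print(count_by_patches(np.zeros((512, 512, 3), dtype=np.uint8), dummy_model))
```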
Paper #4 - Comparison of Tracking Techniques on 360-Degree Videos
Evaluation Criteria
- Viewpoint
- Occlusion
- Deformation
- Lighting
- Scale
- Shakiness
Paper #5 - Beyond Counting: Comparisons of Density Maps for Crowd Analysis Tasks - Counting, Detection, and Tracking
Key Notes
- Counting
- Clustering of points
- Creation of a density map (a detection-from-density-map sketch follows this list)
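A minimal sketch, assuming detections are recovered as local maxima of the estimated density map; the threshold and neighbourhood size are illustrative, and the paper's actual detection procedure may differ.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peaks_from_density(density, min_val=0.05, size=5):
    """Return (row, col) head candidates as local maxima of a density map."""
    local_max = maximum_filter(density, size=size) == density
    mask = local_max & (density > min_val)  # suppress flat background "peaks"
    return np.argwhere(mask)

# Usage: heads = peaks_from_density(predicted_density); the count is density.sum()
```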