<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<link rel="icon" href="img/logo.png">
<title>Project</title>
<!-- Bootstrap -->
<link href="css/bootstrap.css" rel="stylesheet">
<link rel="stylesheet" type="text/css" href="css/style.css">
<link href="assets/css/font-awesome.min.css" rel="stylesheet">
<link href="assets/css/owl.carousel.css" rel="stylesheet">
<link href="assets/css/fancybox/jquery.fancybox.css" rel="stylesheet">
<link href="assets/css/style.css" rel="stylesheet">
</head>
<body><header>
<nav class="navbar navbar-default" style="background:#3399FF;color:#fff">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header" >
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="index.html"><img src="img/logo.png" class="img-responsive"></a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
<li><a href="index.html"><b><font color=#003366>Home</font></b></a></li>
<li><a href="People.html"><b><font color=#003366>People</font></b></a></li>
<li><a href="Research.html" ><b><font color=#003366>Research</font></b></a></li>
<li><a href="Datasets.html" ><b><font color=#003366>Datasets</font></b></a></li>
<li><a style="background:#003366;color:#fff" href="#"><b>Project</b></a></li>
<li><a href="publication.html" ><b><font color=#003366>Publications</font></b></a></li>
<li><a href="index.html#footer"><b><font color=#003366>Contact Us</font></b></a></li>
</ul>
</div><!-- /.navbar-collapse -->
</div><!-- /.container-fluid -->
</nav>
</header>
<div class="container-fluid">
<!-- START ABOUT SECTION-->
<section id="about" class="section wow fadeInUpBig">
<div class="container-section">
<div class="row">
<div class="section-title">
<div class="col-lg-12 col-md-12 col-sm-12">
<div class="col-lg-12 col-md-12 col-sm-12">
<a><img src="img/download-removebg-preview.png" width="50" height="40" alt="image"></a>
<h3> <font color=#003366> Timestamp aware Aberrant Detection and Analysis
in Big Visual Data using Deep Learning Architecture</font> </h3>
<h4>Science and Engineering Research Board (SERB): SERB/EEQ/2017/000673 </h4>
<h5 style="color:black"><b>Funding Agency: </b>Science and Engineering Research Board, Department of Science and Technology (SERB-DST, 2018)</h5>
<h5 style="color:black"><b>Principal Investigator: </b>Dr. Santosh Kumar Vipparthi</h5>
<h5 style="color:black"><b>JRF/Ph.D. Scholar: </b>Kuldeep Marotirao Biradar </h5>
<h5 style="color:black">
<b>Introduction: </b> The proposed system removes the onus of detecting aberrant situations from the manual operator
and instead places it on the video surveillance system.
Present technologies fail to recognize aberrations in video sequences. These aberrations
occur over a small time window, so recognizing an aberration together with its timeframe in big visual data is a genuinely
challenging task. We therefore focus on problems where we are given a set of nominal training
video samples and, based on these samples, must determine whether or not a test video contains an
aberration and at what instant it occurs. In doing so, we aim to significantly reduce time and human
effort by automating the task, and to improve accuracy by recognizing aberrations together with their
timestamps. Further, we exploit the aberrant activity of an object by modeling the rich motion
patterns in a selected region, effectively capturing the underlying intrinsic structure they form in the
video. Such a system can benefit intelligence agencies, banks, department stores, highway traffic monitoring,
airport check-in terminals, sports, the medical field, robotics, and more.
</h5>
</div>
</div>
</div>
</div>
<div class="section-title">
</div>
<h4 id="aboutme"><b>PROJECT ACTIVITIES AND FINDINGS</b></h4>
<h4><b>Anomaly Detection in Traffic Videos</b></h4>
<h5>
  <u><a href="http://openaccess.thecvf.com/content_CVPRW_2019/papers/AI%20City/Biradar_Challenges_in_Time-Stamp_Aware_Anomaly_Detection_in_Traffic_Videos_CVPRW_2019_paper.pdf" target="_blank" >Paper</a></u>  
  <u><a href="assets/publications/CVPR Workshops 2019.pdf" target="_blank" >PPT</a></u>  
</h5>
<a class="profile-img" href="index.html"><img src="assets/images/CVPRW paper pics.png" width="100%" height="350" alt="image"></a>
<div class="section-title">
</div>
<h4><b>New Anomaly Dataset</b></h4>
<h5>
A custom dataset was generated in a staged, controlled environment. We shot with four strategically placed cameras
simultaneously to capture multiple views of the same scene. The videos were recorded at four different locations at different times
of the day. The scenes include normal activity as well as fights in different scenarios, snatching, kidnapping, and similar events.
Scenes were shot both indoors and outdoors, in natural, artificial, and low light, to cover illumination changes. The videos were recorded
from varied distances to capture subjects of varying sizes. Post-processing yielded approximately 90 minutes of usable clips (90 × 60 × 30 × 4 = 648,000 frames across the four cameras).
Snippets from the dataset are depicted in the figure below.
</h5>
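As a quick sanity check of the frame count quoted above (assuming the factor of 30 in the product is the recording frame rate in frames per second, which is not stated explicitly):

```python
# Dataset size check: 90 minutes of footage, 60 seconds per minute,
# an assumed 30 frames per second, recorded simultaneously by 4 cameras.
minutes, fps, cameras = 90, 30, 4
total_frames = minutes * 60 * fps * cameras
print(total_frames)  # 648000
```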
<a class="profile-img" href="#"><img src="assets/images/ds2.JPG" width="100%" height="350" alt="image"></a>
<br><br>
<div class="section-title">
<h4 >PUBLICATIONS</h4>
</div>
<h5 style="color:black">
<b>1.</b>
Murari Mandal, Vansh Dhar, Abhishek Mishra, <b>Santosh Kumar Vipparthi</b>, Mohamed Abdel-Mottaleb, <a href="#"target="_blank">“3DCD: A Scene Independent End-to-End Spatiotemporal Feature Learning Framework for Change Detection in Unseen Videos,” </a>IEEE Transactions on Image Processing, 2020, <b>(<a href="#"target="_blank">PDF</a>)</b> (Impact Factor 9.34)
</h5>
<h5 style="color:black">
<b>2.</b>
Murari Mandal, <b>Santosh Kumar Vipparthi</b>, <a href="#" target="_blank">“Scene Independency Matters: An Empirical Study of Scene Dependent and Scene Independent Evaluation for CNN-Based Change Detection,” </a>IEEE Transactions on Intelligent Transportation Systems, 2020, <b>(<a href="https://ieeexplore.ieee.org/document/9238403" target="_blank">PDF</a>)</b> (Impact Factor 6.319)
</h5>
<h5 style="color:black">
<b>3.</b>
Murari Mandal, Lav Kush Kumar, <b>Santosh Kumar Vipparthi</b>, <a href="#"target="_blank">“MOR-UAV: A Benchmark Dataset and Baselines for Moving Object Recognition in UAV Videos,” </a> ACM Multimedia (ACMMM - 2020), <b>(<a href="https://dl.acm.org/doi/10.1145/3394171.3413934"target="_blank">PDF</a>)</b> (Core - A*)
</h5>
<h5 style="color:black">
<b>4.</b>
Murari Mandal, Vansh Dhar, Abhishek Mishra, <b> Santosh Kumar Vipparthi</b>, <a href="https://ieeexplore.ieee.org/document/8894435"target="_blank"> “3DFR: A Swift 3D Feature Reductionist Framework for Scene
Independent Change Detection,” </a> IEEE Signal Processing Letters, 2019 (Impact Factor 3.268).</h5>
<h5 style="color:black">
<b>5.</b>
Murari Mandal, Lav Kush Kumar, Mahipal Singh Saran, <b>Santosh Kumar Vipparthi</b>, <a href="#"target="_blank">“MotionRec: A Unified Deep Framework for Moving Object
Recognition,”</a> IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, Colorado, US, 2020
</h5>
<h5 style="color:black">
<b>6.</b>
Monu Verma, <b>Santosh Kumar Vipparthi</b>, Girdhari Singh, Subrahmanyam Murala, <a href="https://arxiv.org/abs/1904.09410" target="_blank">“LEARNet:
Dynamic Imaging Network for Micro Expression Recognition,”</a> IEEE Transactions on Image
Processing, 2019 (Impact Factor 6.79).</h5>
<h5 style="color:black">
<b>7.</b>
Murari Mandal, Monu Verma, Sonakshi Mathur, <b>Santosh Vipparthi</b>, Subrahmanyam Murala, Kranthi Deveerasetty,
<a href="https://digital-library.theiet.org/content/journals/10.1049/iet-ipr.2018.5683"target="_blank">"RADAP: Regional Adaptive Affinitive Patterns with
Logical Operators for Facial Expression Recognition,"</a> IEEE/IET Image Processing, 2019 (Impact Factor 2.004).
</h5>
<h5 style="color:black">
<b>8.</b>Maheep Singh, Mahesh C. Govil, Emmanuel S. Pilli, <b>Santosh Kumar Vipparthi</b>,
<a href="https://ieeexplore.ieee.org/document/8863240" target="_blank">"SOD-CED: salient object detection for noisy images using convolution encoder–decoder,"</a> IEEE/IET Computer Vision, 2019 (Impact Factor 1.648)</h5>
<h5 style="color:black">
<b>9.</b>Murari Mandal, <b>Santosh Kumar Vipparthi</b>, Mallika Chaudhary, Subrahmanyam Murala, Anil Balaji Gonde,
S. K. Nagar,<a href="https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2018.5206"target="_blank"> "ANTIC: ANTithetic Isomeric Cluster Patterns for Medical Image Retrieval and Change Detection,"
</a> IEEE/IET Computer Vision, 2018 (Impact Factor 1.648).
</h5>
<h5 style="color:black">
<b>10.</b>
Kuldeep Marotirao Biradar, Ayushi Gupta, Murari Mandal, <b>Santosh Kumar Vipparthi</b>,<a href="http://openaccess.thecvf.com/content_CVPRW_2019/papers/AI%20City/Biradar_Challenges_in_Time-Stamp_Aware_Anomaly_Detection_in_Traffic_Videos_CVPRW_2019_paper.pdf"target="_blank">
"Challenges in Time-Stamp Aware Anomaly Detection in Traffic Videos,"
</a> In CVPR Workshops (CVPRW), Long Beach, California, US, 2019.
</h5>
<h5 style="color:black">
<b>11.</b>Murari Mandal, Prafulla Saxena, <b>Santosh Kumar Vipparthi</b>, Subrahmanyam Murala,
<a href="https://ieeexplore.ieee.org/document/8545504"target="_blank">"CANDID: Robust Change Dynamics and Deterministic Update Policy for Dynamic Background Subtraction,"</a>
IEEE 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018.</h5>
<h5 style="color:black">
<b>12.</b>Monu Verma, Jaspreet Kaur Bhui, <b>Santosh Kumar Vipparthi</b>,
Girdhari Singh, <a href="https://ieeexplore.ieee.org/abstract/document/8616256"target="_blank">"EXPERTNet: Exigent Features Preservative Network for Facial
Expression Recognition,"</a> ACM 11th International Conference on Computer Vision, Graphics and
Image Processing (ICVGIP), Hyderabad, India, 2018.
</h5>
<h5 style="color:black">
<b>13.</b>Kuldeep Biradar, Sachin Dube, <b>Santosh Kumar Vipparthi</b>, <a href="https://ieeexplore.ieee.org/document/8721378"target="_blank">
“DEAREST: Deep Convolutional Aberrant Behaviour Detection in Real world Scenario,”</a>
13th International Conference on Industrial and Information Systems (ICIIS), Ropar, 2018
</h5>
<h5 style="color:black">
<b>14.</b>Shivangi Dwivedi, Murari Mandal, Shekhar Yadav, <b>Santosh Kumar Vipparthi</b>, <a href="https://arxiv.org/abs/1912.03000"target="_blank">
“3D CNN with Localized Residual Connections for Hyperspectral Image Classification,”</a>
4th International Conference on Computer Vision and Image Processing (CVIP), 2019
</h5>
<h5 style="color:black">
<b>15.</b>Monu Verma, Prafulla Saxena, <b>Santosh Kumar Vipparthi</b>, Girdhari Singh, S K Nagar, <a href="https://www.researchgate.net/publication/337387469_DeFINet_PORTABLE_CNN_NETWORK_FOR_FACIAL_EXPRESSION_RECOGNITION"target="_blank">
“DeFINet: Portable CNN Network for Facial Expression Recognition,”</a>
IEEE International Conference on Information and Communication Technology for Competitive Strategies, 2019
</h5>
<h5 style="color:black">
<b>16.</b>
Murari Mandal, Manal Shah, Prashant Meena, Sanhita Devi, <b>Santosh Kumar Vipparthi</b>,<a href="https://ieeexplore.ieee.org/document/8755462" target="_blank"> “AVDNet:
A Small-Sized Vehicle Detection Network for Aerial Visual Data,” </a> IEEE Geoscience and Remote Sensing Letters, doi: 10.1109/LGRS.2019.2923564 (Impact Factor 3.534)
</h5>
<h5 style="color:black">
<b>17.</b>
Murari Mandal, Manal Shah, Prashant Meena, <b>Santosh Kumar Vipparthi</b>,<a href="https://ieeexplore.ieee.org/document/8803262/" target="_blank">
“SSSDET: Simple Short and Shallow Network for Resource Efficient Vehicle Detection in Aerial Scenes,” </a> 26th IEEE International Conference on
Image Processing (ICIP), Taipei, Taiwan, 2019
</h5>
</div>
</section>
<!-- END ABOUT SECTION-->
</body>
</html>