About Me

I am currently employed as an Associate Computer Scientist/Engineer III in the Marconi-Rosenblatt AI/ML Innovation Lab at ANDRO Computational Solutions LLC, a defense contracting firm in Rome, NY. At ANDRO, I lead a team of engineers building a vision-based collision avoidance and navigation subsystem for autonomous UAV applications. Our solution combines traditional vision-based autonomy components with machine learning-based perception modules - reaping the benefits of both robustness and generalization. The subsystem is powered by optimized CUDA C++ code and hardware-accelerated vision models deployed on an embedded NVIDIA Jetson platform, with TensorRT used for further model optimization. Link to project press release.

In 2021 I graduated with a Master's in Computer Science from the University of Massachusetts Amherst, focusing on reinforcement learning (RL). During my time at UMass, I was fortunate to conduct research with Dr. Shlomo Zilberstein on safe AI methods for avoiding "dead ends" in online planning and RL. My Master's work culminated in research on offline reinforcement learning methods for constrained MDPs, during which I was advised by Professors Ina Fiterau & Bruno Castro da Silva. Our work, "Constrained Offline Policy Optimization", introduces a novel constrained policy projection algorithm which, given a reward-optimal policy, finds the cost-feasible policy closest to that reward-maximizing policy. This work was accepted to ICML 2022.

In May 2017 I graduated from Cornell University with a BS in Computer Science and thereafter joined ANDRO full time, working on automatic modulation classification and multi-task learning for signal intelligence prior to my current autonomy project.

Some of my research interests include:

  • Sample Efficient Reinforcement Learning
  • Constrained Reinforcement Learning
  • Convex Analysis/Optimization
  • Autonomous UAVs
  • Vision-based Navigation

Some of my self-study interests include:

  • Mathematical Analysis, Fourier Analysis, & Functional Analysis
  • Philosophy
  • Nutrition & Health

Projects

Deep Reinforcement Learning based Autonomous Unmanned Aerial Vehicles

At ANDRO, I lead a team of engineers building a vision-based collision avoidance and navigation subsystem for autonomous UAV applications for the US Navy. We use vision-based autonomy algorithms to navigate a UAV through tactical environments in GPS-denied or lost-data-link scenarios. Additionally, to enhance human-machine teaming, we have developed vision-based natural user interfaces (NUIs) that allow humans to interact with the UAV via natural gestures (hand signals) and chromatic symbols.

Constrained Offline Policy Optimization

In this work we introduce Constrained Offline Policy Optimization (COPO), an offline policy optimization algorithm for learning in MDPs with cost constraints. COPO is built upon a novel offline cost-projection method, which we formally derive and analyze. Our method improves upon the state of the art in offline constrained policy optimization by explicitly accounting for distributional shift and by offering non-asymptotic confidence bounds on the cost of a policy; existing techniques, by contrast, only guarantee convergence to a point estimate. We formally analyze our method and empirically demonstrate that it achieves state-of-the-art performance on discrete and continuous control problems while offering these stronger theoretical guarantees.
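To give a feel for the projection idea, here is a toy, single-state sketch (not the paper's algorithm): KL-project a reward-optimal action distribution onto the cost-feasible set. For a linear expected-cost constraint, the KL projection has an exponential-tilt closed form, with the tilt strength found by bisection. All names and parameters below are illustrative.

```python
import numpy as np

def project_policy(pi_r, costs, budget, lam_hi=100.0, iters=60):
    """KL-project a reward-optimal action distribution pi_r onto the
    cost-feasible set {pi : E_pi[c] <= budget} (single-state toy case).

    The exponential tilt pi(a) proportional to pi_r(a) * exp(-lam * c(a))
    is the closed-form KL projection; lam is found by bisection, since
    expected cost decreases monotonically in lam."""
    def tilt(lam):
        w = pi_r * np.exp(-lam * costs)
        return w / w.sum()

    if pi_r @ costs <= budget:        # already feasible: projection is identity
        return pi_r
    lo, hi = 0.0, lam_hi
    for _ in range(iters):            # shrink toward the lam hitting the budget
        mid = 0.5 * (lo + hi)
        if tilt(mid) @ costs > budget:
            lo = mid                  # still infeasible: tilt harder
        else:
            hi = mid                  # feasible: try a gentler tilt
    return tilt(hi)                   # hi side is always feasible
```

For example, projecting pi_r = [0.7, 0.2, 0.1] with per-action costs [1, 0, 0] and budget 0.5 reduces the first action's probability until the expected cost meets the budget, while staying as close as possible (in KL) to pi_r.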

Mitigating Overestimation and Distributional Shift in Offline Reinforcement Learning

The offline reinforcement learning setting - learning policies from a static data set - introduces challenges associated with the inability to sample data from the learned policy. Among these, overestimation of state-action values and distributional shift are the most detrimental to learning policies that perform well once deployed. In this work, we introduce a novel offline learning algorithm that directly addresses both overestimation and distributional shift without restricting value estimates to the data distribution. Our algorithm, Initial and Semi-Implicit Q-Learning (ISIQL), learns using value targets constructed from a mixture of estimates from both the data distribution and the current policy's action distribution, thereby allowing for policy improvement outside of the behavior distribution when possible. Our value objective additionally incorporates an initial state-action value term, which, as we show, mitigates distributional shift. We motivate these learning components by connecting them to prior work, and show various ways a stochastic policy may be extracted from the learned value functions. Lastly, we show that ISIQL achieves state-of-the-art performance on online MuJoCo benchmark tasks and offline D4RL data sets, most notably offering a 10% performance gain in offline locomotion and maze tasks.
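The mixed-bootstrap idea above can be sketched in a few lines. This is an illustrative stand-in, not the paper's objective: the mixing weight beta, the function names, and the expectile loss (the standard device implicit-style methods use to estimate in-sample values without querying out-of-sample actions) are all my own choices here.

```python
import numpy as np

def expectile_loss(diff, tau=0.7):
    """Asymmetric squared loss used by implicit-style offline methods to fit
    an in-sample (data-distribution) value estimate: positive TD errors are
    up-weighted by tau, pushing the estimate toward an upper expectile of
    in-distribution returns without evaluating out-of-sample actions."""
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return weight * diff ** 2

def mixed_td_target(r, q_next_data, q_next_policy, gamma=0.99, beta=0.5):
    """TD target mixing a data-distribution bootstrap with a bootstrap from
    the current policy's action distribution. beta=0 stays fully in-sample;
    beta=1 bootstraps entirely from the learned policy's actions, allowing
    improvement beyond the behavior distribution (at some shift risk)."""
    return r + gamma * ((1.0 - beta) * q_next_data + beta * q_next_policy)
```

With gamma = 0.5 and beta = 0.5, a reward of 1 with in-sample bootstrap 2 and policy bootstrap 4 yields a target of 1 + 0.5 * 3 = 2.5, interpolating between the conservative and optimistic estimates.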

Intelligent Signal Detector and Classifier

At ANDRO, our team is developing a machine learning based RF signal detector and classifier for the Army. The envisioned final product should aid EM spectrum analysts in signal identification by presenting a multitude of estimated signal characteristics to the user simultaneously. Our approach builds on our automatic modulation classification (AMC) technology by extending the solution to a multi-task learning paradigm.
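The multi-task extension can be sketched as a shared feature trunk feeding several task-specific heads. This is a minimal illustrative forward pass only; the layer sizes, task choices (modulation class plus an SNR estimate), and class count are assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskSignalNet:
    """Shared-trunk / multi-head sketch of the multi-task idea: one feature
    extractor over raw signal features feeds separate heads, so several
    signal characteristics are estimated from a single forward pass."""

    def __init__(self, d_in=128, d_feat=64, n_mods=11):
        self.W_shared = rng.normal(0, 0.1, (d_in, d_feat))
        self.W_mod = rng.normal(0, 0.1, (d_feat, n_mods))  # classification head
        self.W_snr = rng.normal(0, 0.1, (d_feat, 1))       # regression head

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W_shared)             # shared ReLU features
        logits = h @ self.W_mod
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)              # softmax over classes
        return probs, (h @ self.W_snr).squeeze(-1)         # per-task outputs
```

Sharing the trunk is the usual multi-task rationale: the heads regularize one another, and the analyst-facing tool gets every characteristic from one inference pass instead of running one model per task.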

Publications

Journals

  • J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, T. Melodia, "Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey," Ad Hoc Networks (Elsevier), vol. 93, art. no. 101913, June 2019. [pdf][bibtex]

Conferences

  • N. Polosky, B. C. da Silva, I. Fiterau, J. Jagannath, “Constrained Offline Policy Optimization,” in International Conference on Machine Learning (ICML), Baltimore, MD, USA, July 2022. [pdf]
  • N. Polosky, T. Gwin, S. Furman, P. Barhanpurkar, J. Jagannath, “Machine Learning Subsystem for Autonomous Collision Avoidance on a small UAS with Embedded GPU,” in IEEE International Workshop on Communication and Networking for Swarms Robotics, Virtual, January 2022. [pdf][bibtex]
  • J. Jagannath, N. Polosky, D. O’Connor, L. Theagarajan, B. Sheaffer, S. Foulke, P. Varshney, “Artificial Neural Network based Automatic Modulation Classifier for Software Defined Radios,” in Proc. of IEEE International Conference on Communications (ICC), Kansas City, MO, USA, May 2018. [pdf][bibtex]
  • N. Polosky, J. Jagannath, D. O'Connor, H. Saarinen, S. Foulke, “Artificial Neural Network with Electroencephalogram Sensors for Brainwave Interpretation: Brain-Observer-Indicator Development Challenges,” in Proc. of 13th International Conference & Expo on Emerging Technologies for a Smarter World (CEWIT), Stony Brook, NY, USA, November 2017. [pdf][bibtex]
  • J. Jagannath, D. O'Connor, N. Polosky, B. Sheaffer, L. N. Theagarajan, S. Foulke, P. K. Varshney, S. P. Reichhart, "Design and Evaluation of Hierarchical Hybrid Automatic Modulation Classifier using Software Defined Radios," in Proc. of IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, January 2017. [pdf][bibtex]

Invited Book Chapters

  • J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, T. Melodia, “Neural Networks for Signal Intelligence: Theory and Practice” in Machine Learning for Future Wireless Communications, Eds. D. L. Luo, Wiley - IEEE Series, November 2019, ISBN: 9781119562252.[pdf][bibtex]

Patents

  • Vertebral assist spinal medical device. U.S. Patent Number US 10,966,757 B2. Patent Granted.
  • Machine learning framework for control of autonomous agent operating in dynamic environment. April 2020. Provisional Patent.
  • Multi-task learning neural network framework for RF spectrum sensing and classification. April 2020. Provisional Patent.

Review Service

  • Advances in Neural Information Processing Systems (NeurIPS)
  • International Conference on Machine Learning (ICML)
  • IEEE International Conference on Communications (ICC)

Contact

Email: poloskynick (at) gmail (dot) com

LinkedIn: HERE

Google Scholar: HERE

ResearchGate: HERE