Today, data is the most important resource in all walks of life - business, healthcare, finance, manufacturing, education, entertainment, and more. Once data is generated, we must decide how to store it. To protect data against local failures and hacker attacks, and to make the best use of available storage space, data is stored over a distributed network. Important issues in distributed storage include download speed, repair speed after local failures, storage efficiency, privacy guarantees and resilience against hacker attacks. We draw on tools and insights from mathematics, communication theory, information theory, statistics and error-correction theory to construct the best possible distributed storage systems.

Data processing is another crucial issue. Machine learning is a form of data processing in which the necessary information is extracted from massive data using modern learning methods and built into critical autonomous decision-making processes, as in self-driving cars and smart factories. Machine learning based on deep neural networks (DNNs) has recently found great success in many applications. However, DNNs require exorbitant amounts of training data as well as massively scaled neural nets. Often both the training data and the computing resources are widely scattered across the network. It thus becomes essential to train and operate a neural net whose parts reside at different locations over the network while performing a given job. Distributed methods of learning and computing are critical in this sense and represent a new and exciting area of research.

Another imminent issue in machine learning is complexity and energy consumption. In certain applications, learning has to take place on a machine with limited computational capability and a finite energy or power budget, without resorting to powerful general-purpose GPUs. In this sense, hardware-friendly learning that targets low computational load and low energy/power consumption is of great practical interest.

Computing power, communication bandwidth and storage space are resources that can be traded off against one another when operating a powerful distributed machine learning algorithm. What are the mathematical laws behind the optimal tradeoff? The practices and theories of machine learning, modern communication and distributed storage all come together in answering this question.


  • Distributed Storage System

    A distributed storage system is a network of storage nodes that stores data reliably over a long period of time. We focus on network coding for distributed storage, considering bandwidth efficiency, latency and security. [4-7]
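    As a minimal illustration of the erasure-coding idea behind such systems (a toy sketch, not the lab's actual network codes), the example below stores two data blocks plus an XOR parity block on three nodes, so that any single node failure can be repaired from the two survivors:

```python
# Toy (3, 2) single-parity erasure code: two data blocks and one XOR
# parity block are stored on three nodes; losing any one node leaves
# enough information to rebuild it from the other two.

def encode(block_a: bytes, block_b: bytes) -> list:
    """Place two equal-length data blocks plus their XOR parity on 3 nodes."""
    parity = bytes(x ^ y for x, y in zip(block_a, block_b))
    return [block_a, block_b, parity]

def repair(nodes: list) -> list:
    """Rebuild the single failed node (marked None) by XORing the 2 survivors."""
    missing = nodes.index(None)
    survivors = [n for n in nodes if n is not None]
    nodes[missing] = bytes(x ^ y for x, y in zip(*survivors))
    return nodes

nodes = encode(b"hello-wo", b"rld-data")
nodes[1] = None          # node 1 fails
nodes = repair(nodes)
assert nodes[1] == b"rld-data"
```

    Practical distributed-storage codes additionally minimize the bandwidth consumed during repair, which is the focus of the network-coding work cited above.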

  • Artificial Intelligence

    We focus on developing a clustered neural network with an external memory for high-speed, adaptive meta-learning. Both theoretical and experimental analyses are ongoing. [13]

  • Multi-Modal Learning

    A multi-modal learning system combines signals from various types of sensors to derive optimal solutions for extremely complex applications (e.g., autonomous vehicles). We focus on developing hardware-friendly multi-modal learning algorithms based on a clustered neural network structure, distributed processors and information-exchange modules.

  • Distributed Computing

    Coding for distributed computing enables low-latency computation by relieving the burden imposed by straggling workers. We propose a hierarchical coding scheme, motivated by the need to reflect the architectures of real-world distributed computing systems, and show that it outperforms existing schemes in many practical scenarios [14].
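    The basic straggler-mitigation idea can be sketched as follows (a toy MDS-style example, not the hierarchical scheme of [14]): the job A @ x is split into two halves and encoded into three coded tasks, so that any two finished workers suffice to recover the full result and the slowest worker never delays the computation:

```python
import numpy as np

# (3, 2) MDS-style coded matrix-vector multiplication: split A @ x
# into k = 2 block tasks, encode into n = 3 coded tasks, and recover
# the answer from ANY 2 completed workers.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]            # split the job into k = 2 blocks
tasks = [A1, A2, A1 + A2]        # encode into n = 3 coded tasks

# Suppose worker 1 (holding A2) straggles; workers 0 and 2 finish first.
r0 = tasks[0] @ x                # = A1 @ x
r2 = tasks[2] @ x                # = (A1 + A2) @ x
recovered = np.concatenate([r0, r2 - r0])   # A2 @ x = r2 - r0

assert np.allclose(recovered, A @ x)
```

    Hierarchical schemes refine this setup by matching the code structure to the layered architecture of real clusters.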

  • 3D NAND Flash

    3D NAND flash has recently been introduced to replace 2D NAND flash. We investigate equalization techniques and error-correction coding algorithms to reduce the error rate of 3D NAND flash.

  • 5G Communications

    Pilot Reuse Strategies in Massive MIMO Systems. We proposed pilot reuse methods that effectively mitigate pilot contamination and increase the net throughput. [1, 11]

    New Waveform Design for Low-Latency 5G Communications. We investigate ways of improving OFDM by combining windowing and filtering. [2]

    Non-orthogonal multiple access (NOMA) makes it possible to support a massive number of users with high data rates. Our research focuses on optimizing and improving various NOMA systems [15].

    Video-on-demand (VOD) users typically request various video qualities. We proposed an efficient video caching scheme for such users, along with a video delivery policy that supports dynamic streaming [16].

  • System Error-Correction Codes (ECCs)
    Error-correction codes (ECCs) for storage applications are under study, with particular focus on the tradeoffs and efficient allocation of resources throughout the system. [3, 8-10, 12]
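    As a toy illustration of the encode/corrupt/decode flow (storage-grade codes such as the LDPC, polar and Raptor codes studied above are far stronger), the sketch below uses a Hamming(7,4) code to correct a single bit error:

```python
import numpy as np

# Hamming(7,4): encodes 4 data bits into a 7-bit codeword and corrects
# any single bit flip via syndrome decoding.

G = np.array([[1,0,0,0,1,1,0],    # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],    # parity-check matrix; G @ H.T = 0 mod 2
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

received = codeword.copy()
received[2] ^= 1                  # a single storage error flips one bit

syndrome = H @ received % 2
# The syndrome equals the column of H at the error position.
err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
received[err] ^= 1                # correct the flipped bit
assert np.array_equal(received[:4], data)
```

    The resource-allocation questions studied in the lab concern how much redundancy like this to spend at each layer of the storage system.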

  • References

    [1] J. Sohn, S. W. Yoon and J. Moon, "Pilot Reuse Strategy Maximizing the Weighted-Sum-Rate in Massive MIMO Systems," IEEE Journal on Selected Areas in Communications, vol. 35, no. 8, pp. 1728-1740, Aug. 2017.
    [2] D.-J. Han, J. Moon, D. Kim, S.-Y. Chung and Y. Lee, "Combined Subband-Subcarrier Spectral Shaping in Multi-Carrier Modulation under the Excess Frame Length Constraint," IEEE Journal on Selected Areas in Communications, vol. 35, no. 6, pp. 1339-1352, June 2017.
    [3] S. W. Yoon and J. Moon, "Low-Complexity Concatenated Polar Codes with Excellent Scaling Behavior," 2017 IEEE International Conference on Communications (ICC) Workshop on Channel Coding for 5G and Future Networks, May 2017.
    [4] H. Park and J. Moon, "Improving Read Access Time of High-Performance Solid-State Drives via Layered Coding Schemes," 2017 IEEE International Conference on Communications (ICC), May 2017.
    [5] J. Sohn, B. Choi, S. W. Yoon and J. Moon, "Capacity of Clustered Distributed Storage," accepted for publication in IEEE Transactions on Information Theory, Apr. 2018 (conference version won the Best Paper Award of IEEE ICC 2017).
    [6] B. Choi, J. Sohn, S. W. Yoon and J. Moon, "Secure Clustered Distributed Storage Against Eavesdroppers," 2017 IEEE International Conference on Communications (ICC), May 2017.
    [7] D. Lee, H. Park and J. Moon, "Reducing Repair-Bandwidth Using Codes Based on Factor Graphs," 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, May 2016.
    [8] S. Kang, J. Moon, J. Ha and J. Shin, "Breaking the Trapping Sets in LDPC Codes: Check Node Removal and Collaborative Decoding," in IEEE Transactions on Communications, Jan. 2016.
    [9] J. Oh, J. Ha, H. Park and J. Moon, "RS-LDPC Concatenated Coding for the Modern Tape Storage Channel," in IEEE Transactions on Communications, Jan. 2016.
    [10] S. W. Yoon and J. Moon, "Two-Dimensional Error-Pattern-Correcting Codes," in IEEE Transactions on Communications, Aug. 2015.
    [11] J. Y. Sohn, S. W. Yoon and J. Moon, "When pilots should not be reused across interfering cells in massive MIMO," 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015 (won the Qualcomm Innovation Award in 2015).
    [12] G. Yu and J. Moon, "Concatenated Raptor Codes in NAND Flash Memory," in IEEE Journal on Selected Areas in Comm., May 2014.
    [13] S. W. Yoon, J. Seo and J. Moon, "Meta Learner with Linear Nulling," arXiv preprint arXiv:1806.01010, 2018.
    [14] H. Park, K. Lee, J. Sohn, C. Suh and J. Moon, “Hierarchical Coding for Distributed Computing,” 2018 IEEE International Symposium on Information Theory (ISIT), June 2018.
    [15] M. Choi, D.-J. Han and J. Moon, “Bi-Directional Cooperative NOMA without Full CSIT,” accepted for publication in IEEE Transactions on Wireless Communications, Aug. 2018.
    [16] M. Choi, J. Kim and J. Moon, “Wireless Video Caching and Dynamic Streaming under Differentiated Quality Requirements,” accepted for publication in IEEE Journal on Selected Areas in Communications, April 2018.

  • Copyright © Moon Lab., 2017
    School of Electrical Engineering, Korea Advanced Institute of Science and Technology
    291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea