The AI-based Music iOS App is an advanced version of current music apps. With voice and face recognition, it not only reduces manual effort but also scans and interprets user data, playing music that matches the user's facial expression, with FM radio as an additional feature. This project focuses on designing a user-friendly, effortless application that offers songs and playlists based on the user's mood, and that visually challenged people can use freely.
Music plays a central role in everyday life, yet in current applications the user must operate everything manually, which restricts visually impaired users and others. This work offers an enhanced experience for music listeners and bridges the gap between users and emerging technology. The motive is to design and develop an AI-driven, voice-controlled application using Flutter and the Alan API. The main idea is to reduce manual search time, to auto-queue the next related song, and to provide the user's favorite radio stations in an on-the-go mode.
In pre-existing applications, manually searching for and segregating songs according to one's mood wastes a lot of time. A voice-command feature provides a direct medium of interaction with the application, which hears the request and identifies songs within seconds.
In existing applications, a person needs to sit and browse through a playlist of tunes, selecting those that match their state of mind. To meet this need, the user repeatedly goes through the chore of browsing the playlist so that songs consistent with their mood and emotions can be organized into playlists for different types of moods.
Current music player apps require human interaction to change the playing track, skip to the next song, or shuffle; this limits visually impaired users from utilizing the app.
This project aims to create an AI-based music player mobile application using Flutter and the Alan API. This permits the user to control the whole application through voice commands, including searching for any song, and minimizes the touch interaction needed between the user and the phone. The application is built to run on both iOS and Android platforms.
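The auto-queue idea mentioned above (playing the next related song without manual input) can be sketched as follows. This is a minimal illustration, not the app's actual implementation: the track data, mood tags, and tag-overlap similarity rule are all assumptions made for the sketch.

```python
# Minimal auto-queue sketch: pick the next track whose mood tags overlap
# most with the track that just finished. Track titles, tag names, and the
# overlap-based similarity are illustrative assumptions only.

def next_related(current, library):
    """Return the library track sharing the most mood tags with `current`."""
    candidates = [t for t in library if t["title"] != current["title"]]
    return max(candidates,
               key=lambda t: len(set(t["tags"]) & set(current["tags"])))

library = [
    {"title": "Sunrise Drive", "tags": ["happy", "calm"]},
    {"title": "Midnight Rain", "tags": ["sad", "slow"]},
    {"title": "Beach Party",   "tags": ["happy", "dance", "upbeat"]},
]

current = {"title": "Good Vibes", "tags": ["happy", "upbeat"]}
print(next_related(current, library)["title"])  # prints "Beach Party"
```

A real implementation would rank candidates by richer metadata (genre, tempo, listening history), but the queueing step reduces to the same "most similar next track" selection.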
III. LITERATURE SURVEY
In one particular system, Python 2.7, the Open Source Computer Vision Library (OpenCV), and the CK (Cohn-Kanade) and CK+ (Extended Cohn-Kanade) databases gave approximately 83% accuracy. Researchers have described the Extended Cohn-Kanade (CK+) database for those wanting to prototype and benchmark systems for automatic facial expression detection; given the popularity and ease of access of the original Cohn-Kanade dataset, it is seen as a very valuable addition to the existing corpora. It was also stated that for a fully automatic system to be robust across all expressions in a myriad of realistic scenarios, more data is required: very large, reliably coded datasets covering a wide array of visual variability (at least 5,000 to 10,000 examples per action), which would require a collaborative research effort across institutions. In a cross-database experiment, raw features worked best with logistic regression when testing on the RaFD (Radboud Faces Database) and a mobile-images dataset, achieving 66% and 36% accuracy respectively with CK+ as the training set. The objective was to develop a system that can analyze the input and predict the expression of the person, and the study showed that this procedure is workable and produces valid results.
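The "raw features with logistic regression" setup in the cross-database study above can be illustrated with a toy sketch. The data, learning rate, and iteration count here are invented for the illustration; the cited work trained on CK+ features and tested on RaFD and mobile images.

```python
import numpy as np

# Toy binary logistic regression trained by batch gradient descent,
# standing in for the "raw features + logistic regression" setup.
# The synthetic data and hyperparameters are illustrative only.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)),   # class 0 feature vectors
               rng.normal(+2, 1, (50, 2))])  # class 1 feature vectors
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", np.mean(pred == y))
```

In the cross-database setting, the key point is that the test samples come from a different dataset than the training samples, which is why accuracy drops sharply (66% and 36%) relative to within-dataset results.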
IV. EXISTING SYSTEM
V. PROPOSED SYSTEM
iOS holds a major share of the smartphone market, and a recent study concluded that the number of people using an iOS smartphone has increased. To satisfy the needs of the iOS community, new applications are constantly being developed, though many charge a fee. A music app is one such application used by everyone; whether working at the office, exercising, or even bathing, people always like to listen to music.
When the application is started, the user gives a voice command to activate it and can then say, for example, "play a song" or "play a happy song", and the app recommends the best-fit song, or "stop this song". Apart from these audio features, there is an inbuilt FM radio with different stations. The application is written in Dart, the Alan API is used for voice recognition, and an SQL database stores the songs and playlists. Using intelligent AI technology, it acts as a personal voice assistant for the iOS system. Visually challenged people can easily access the application through voice commands, and it also guides them through the other features.
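The voice-command flow above might be parsed along these lines. This is a simplified sketch: the command keywords, mood names, and playlist table are assumptions made here, and in the actual app the Alan API handles the speech-to-text and intent matching.

```python
# Simplified intent parsing for commands such as "play a happy song".
# Keywords, mood names, and the playlist mapping are illustrative; the
# real app delegates speech recognition and intents to the Alan API.

MOOD_PLAYLISTS = {
    "happy": ["Beach Party", "Sunrise Drive"],
    "sad":   ["Midnight Rain"],
}

def handle_command(text):
    """Map a recognized utterance to an (action, song) pair."""
    words = text.lower().split()
    if "stop" in words:
        return ("stop", None)
    if "play" in words:
        for mood, playlist in MOOD_PLAYLISTS.items():
            if mood in words:
                return ("play", playlist[0])   # first song of that mood
        return ("play", None)                  # no mood given: caller decides
    return ("unknown", None)

print(handle_command("play a happy song"))  # ('play', 'Beach Party')
print(handle_command("stop this song"))     # ('stop', None)
```

Separating the intent layer from playback this way is also what makes the app usable by visually impaired users: every feature reachable by touch has a spoken equivalent routed through the same handler.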
A. Can be accessed easily by visually impaired people.
B. Secure and efficient system.
C. Can be used hands-free while driving.
D. Can be used while working out at the gym.
E. Effortless application with an on-the-go mode.
A. The user must have knowledge of English.
B. The user must have an iOS device to run this application.
C. It requires an internet connection.
IX. SOFTWARE/HARDWARE REQUIREMENTS
Operating system : Windows 7 and above.
Coding Language : Dart (Flutter framework)
IDE : Visual Studio
System : Intel i3 and above.
Hard Disk : 200 GB.
Monitor : 15" VGA color.
RAM : 4 GB.
X. FUTURE SCOPE
A. The proposed system makes it easier for users to enjoy music.
B. The system recommends songs on the basis of emotion.
C. The application can be maintained and updated regularly.
Our main aim is to design and develop an effortless music app: a new way of accessing songs using machine learning techniques. Our solution produces a playlist better matched to user preferences, and additional attributes will improve decision making and song prediction. The whole application works through voice commands, using intelligent technologies to respond within seconds. The additional FM radio feature helps users stay updated with their surroundings and current trends.
Implementing this prototype in current music applications can provide a better music experience to the user. As future scope, the system could be extended with a mechanism helpful in music therapy, giving music therapists the support they need to treat patients suffering from different disorders.