How to calculate "Average Precision and Ranking" for CBIR system
Yup, that's correct. To be precise, you sum the precision values measured at each rank where a relevant image appears in the result list, then divide by the total number of relevant images. That is the definition of average precision.
Average precision is simply a single number (usually reported as a percentage) that summarizes the overall performance of an image retrieval system. The higher the value, the better the performance. Precision-Recall graphs give you more granular detail on how the system is performing, but average precision is useful when you are comparing many image retrieval systems. Instead of plotting many PR graphs to try and compare the overall performance of many retrieval systems, you can just have a table that compares all of the systems with a single number specifying the performance of each - namely, the average precision.
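As a concrete sketch, here's how you might compute average precision for a single query in Python. The function name and input format are my own choices for illustration: it assumes you have already run the query and have a ranked list of booleans marking which retrieved images are relevant.

```python
def average_precision(relevant_flags):
    """Average precision for one query, given a ranked list of
    booleans marking whether each retrieved image is relevant."""
    hits = 0
    precision_sum = 0.0
    for rank, is_relevant in enumerate(relevant_flags, start=1):
        if is_relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    # Average over the number of relevant images retrieved
    return precision_sum / hits if hits else 0.0

# Example: relevant images returned at ranks 1, 3, and 4
print(average_precision([True, False, True, True, False]))
# (1/1 + 2/3 + 3/4) / 3 ≈ 0.806
```

To compare systems across a whole test collection, you would typically average this value over all queries (the mean average precision, or mAP).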
Also, it doesn't make much sense to plot the average precision. When average precision is reported in scientific papers, there is no plot, just a single value. The only way I could see you plotting it is with a bar graph, where the y-axis denotes the average precision and the x-axis denotes which retrieval system you are comparing; the higher the bar, the better the accuracy. However, a table listing each retrieval system alongside its average precision is more than suitable, and this is what is customarily done in most CBIR research papers.
To address your other question: you calculate the ranking by using the average precision. Compute the average precision for each of the retrieval systems you are testing, then sort the systems by that value. Systems with higher average precision are ranked higher.
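That sorting step is a one-liner. The system names and scores below are made up purely for illustration:

```python
# Hypothetical average-precision scores for three retrieval systems
systems = {"system_A": 0.72, "system_B": 0.81, "system_C": 0.65}

# Rank the systems from best to worst by average precision
ranking = sorted(systems, key=systems.get, reverse=True)
print(ranking)  # ['system_B', 'system_A', 'system_C']
```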