Community Question Answering (CQA) services have gained popularity in recent years. They not only enable community members to post and answer questions, but also allow general users to seek information from a comprehensive repository of well-answered questions. However, existing CQA forums usually provide only textual answers, which are not informative enough for many questions. In this project, we propose a scheme that enriches textual answers in CQA with appropriate media data. Our scheme consists of three components: answer medium selection, query generation for multimedia search, and multimedia data selection and presentation. The approach automatically determines which type of media data should be added to a textual answer, and then automatically collects data from the web to enrich the answer. By processing a large set of QA pairs and adding them to a pool, our approach enables a novel multimedia question answering (MMQA) scheme: users can find multimedia answers by matching their questions with those in the pool. Unlike many MMQA research efforts that attempt to answer questions directly with image and video data, our approach is built on community-contributed textual answers, so it can handle more complex questions. We have conducted extensive experiments on a multi-source QA dataset, and the results demonstrate the effectiveness of our approach.