![](/files/what.png)
WHO WE ARE
![](/files/1221fa06-e7c5-4f5b-9945-0a4bd2564306.jpg)
Malek Al Lahham
Operations & Communications Manager
![](/files/7cac6278-2df3-456f-b526-c17965e29dd8.jpg)
Saeed Hany
Project Manager
![](/files/a95cd595-c576-48c2-90e0-0743f3e18fcc.jpg)
Mohamad Nagi
Architect
![](/files/ac30d10b-5eb9-4444-a3ec-34239c7514a1.jpg)
Mohamad Al Sayed
Architect
![](/files/4ca9046c-828e-4109-8282-60b5760b5136.jpg)
Mohammad Mosilli
Architect
![](/files/6c2a71e5-76e7-48e5-96aa-9786c920ae60.jpg)
Khaled Saleh
Architect
![](/files/114611de-8202-46ca-b1ae-2139e4c3d6e3.jpg)
Ibrahim Zaki
Architect
![](/files/7ebd6a50-c106-41cf-8895-5b504afc9170.jpg)
Houssam Hazem
Architect
![](/files/4ec8a193-97e8-4e36-99be-a8306892f5c1.jpg)
Ehab Hussain
3D Modelling Artist
![](/files/410ec668-dc8e-419d-9bcc-5a49fc5d27fe.jpg)
Hani Ismael
Post Production Artist
![](/files/29563a71-15db-4f65-8da9-128042d978ff.jpg)
Abdalazeez Almujahed
Financial Manager
![](/files/4e9821d9-2e54-4b97-9530-d799486223cb.jpg)
Abdullah Dosuki
Artist
![](/files/a9a2bfcc-a8bb-499e-92de-7442485c902a.jpg)
Amr Galal
Architect
![](/files/77821d9b-c207-441d-ade3-2acccf9edf51.jpg)
Samer Matar
Architect
![](/files/5351942c-d627-4cb3-bada-d56b7a57bd44.jpg)
Imad Fadloun
HR Manager
![](/files/c4ce4da5-70a9-4932-bb91-f34522336190f9c9b2.png)
Hani Edelbi
Architect
![](/files/435212a1-9bf3-4bb4-ba42-b21a3e46c9a7.jpg)
Amin Nabulsi
Architect
![](/files/2e41a6a2-3bad-4732-af8f-7552c2714fd2.png)
Amer Kouly
General Manager